00:00:00.001 Started by upstream project "autotest-per-patch" build number 127090
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.130 The recommended git tool is: git
00:00:00.130 using credential 00000000-0000-0000-0000-000000000002
00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.172 Fetching changes from the remote Git repository
00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.219 Using shallow fetch with depth 1
00:00:00.219 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.219 > git --version # timeout=10
00:00:00.257 > git --version # 'git version 2.39.2'
00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.290 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.290 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.939 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.950 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.964 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:04.964 > git config core.sparsecheckout # timeout=10
00:00:04.973 > git read-tree -mu HEAD # timeout=10
00:00:04.990 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:05.031 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:05.031 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:05.129 [Pipeline] Start of Pipeline
00:00:05.143 [Pipeline] library
00:00:05.144 Loading library shm_lib@master
00:00:05.144 Library shm_lib@master is cached. Copying from home.
00:00:05.160 [Pipeline] node
00:00:05.173 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.174 [Pipeline] {
00:00:05.184 [Pipeline] catchError
00:00:05.185 [Pipeline] {
00:00:05.196 [Pipeline] wrap
00:00:05.205 [Pipeline] {
00:00:05.211 [Pipeline] stage
00:00:05.212 [Pipeline] { (Prologue)
00:00:05.368 [Pipeline] sh
00:00:05.655 + logger -p user.info -t JENKINS-CI
00:00:05.670 [Pipeline] echo
00:00:05.671 Node: CYP9
00:00:05.678 [Pipeline] sh
00:00:05.977 [Pipeline] setCustomBuildProperty
00:00:05.985 [Pipeline] echo
00:00:05.986 Cleanup processes
00:00:05.989 [Pipeline] sh
00:00:06.274 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.274 3335221 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.286 [Pipeline] sh
00:00:06.571 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.571 ++ grep -v 'sudo pgrep'
00:00:06.571 ++ awk '{print $1}'
00:00:06.571 + sudo kill -9
00:00:06.571 + true
00:00:06.584 [Pipeline] cleanWs
00:00:06.592 [WS-CLEANUP] Deleting project workspace...
00:00:06.592 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.600 [WS-CLEANUP] done
00:00:06.602 [Pipeline] setCustomBuildProperty
00:00:06.611 [Pipeline] sh
00:00:06.896 + sudo git config --global --replace-all safe.directory '*'
00:00:06.961 [Pipeline] httpRequest
00:00:06.991 [Pipeline] echo
00:00:06.992 Sorcerer 10.211.164.101 is alive
00:00:06.998 [Pipeline] httpRequest
00:00:07.002 HttpMethod: GET
00:00:07.002 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.003 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.006 Response Code: HTTP/1.1 200 OK
00:00:07.006 Success: Status code 200 is in the accepted range: 200,404
00:00:07.007 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:08.994 [Pipeline] sh
00:00:09.282 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.339 [Pipeline] httpRequest
00:00:09.378 [Pipeline] echo
00:00:09.380 Sorcerer 10.211.164.101 is alive
00:00:09.390 [Pipeline] httpRequest
00:00:09.395 HttpMethod: GET
00:00:09.396 URL: http://10.211.164.101/packages/spdk_19f5787c83b0b216b0d74652a443f51bd9795701.tar.gz
00:00:09.397 Sending request to url: http://10.211.164.101/packages/spdk_19f5787c83b0b216b0d74652a443f51bd9795701.tar.gz
00:00:09.419 Response Code: HTTP/1.1 200 OK
00:00:09.420 Success: Status code 200 is in the accepted range: 200,404
00:00:09.420 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_19f5787c83b0b216b0d74652a443f51bd9795701.tar.gz
00:01:05.248 [Pipeline] sh
00:01:05.535 + tar --no-same-owner -xf spdk_19f5787c83b0b216b0d74652a443f51bd9795701.tar.gz
00:01:08.094 [Pipeline] sh
00:01:08.381 + git -C spdk log --oneline -n5
00:01:08.381 19f5787c8 raid: skip configured base bdevs in sb examine
00:01:08.381 3b9baa5f8 bdev/raid1: Support resize when increasing the size of base bdevs
00:01:08.381 25a9ccb98 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:08.381 38b03952e bdev/compress: check pm path for creating compress bdev
00:01:08.381 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag
00:01:08.395 [Pipeline] }
00:01:08.413 [Pipeline] // stage
00:01:08.422 [Pipeline] stage
00:01:08.424 [Pipeline] { (Prepare)
00:01:08.442 [Pipeline] writeFile
00:01:08.459 [Pipeline] sh
00:01:08.746 + logger -p user.info -t JENKINS-CI
00:01:08.759 [Pipeline] sh
00:01:09.047 + logger -p user.info -t JENKINS-CI
00:01:09.059 [Pipeline] sh
00:01:09.344 + cat autorun-spdk.conf
00:01:09.344 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.344 SPDK_TEST_NVMF=1
00:01:09.344 SPDK_TEST_NVME_CLI=1
00:01:09.344 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.344 SPDK_TEST_NVMF_NICS=e810
00:01:09.344 SPDK_TEST_VFIOUSER=1
00:01:09.344 SPDK_RUN_UBSAN=1
00:01:09.344 NET_TYPE=phy
00:01:09.352 RUN_NIGHTLY=0
00:01:09.355 [Pipeline] readFile
00:01:09.376 [Pipeline] withEnv
00:01:09.378 [Pipeline] {
00:01:09.391 [Pipeline] sh
00:01:09.677 + set -ex
00:01:09.677 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:09.677 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.677 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.677 ++ SPDK_TEST_NVMF=1
00:01:09.677 ++ SPDK_TEST_NVME_CLI=1
00:01:09.677 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.677 ++ SPDK_TEST_NVMF_NICS=e810
00:01:09.677 ++ SPDK_TEST_VFIOUSER=1
00:01:09.677 ++ SPDK_RUN_UBSAN=1
00:01:09.677 ++ NET_TYPE=phy
00:01:09.677 ++ RUN_NIGHTLY=0
00:01:09.677 + case $SPDK_TEST_NVMF_NICS in
00:01:09.677 + DRIVERS=ice
00:01:09.677 + [[ tcp == \r\d\m\a ]]
00:01:09.677 + [[ -n ice ]]
00:01:09.677 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:09.677 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:09.677 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:09.677 rmmod: ERROR: Module irdma is not currently loaded
00:01:09.677 rmmod: ERROR: Module i40iw is not currently loaded
00:01:09.677 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:09.677 + true
00:01:09.677 + for D in $DRIVERS
00:01:09.677 + sudo modprobe ice
00:01:09.677 + exit 0
00:01:09.688 [Pipeline] }
00:01:09.704 [Pipeline] // withEnv
00:01:09.711 [Pipeline] }
00:01:09.726 [Pipeline] // stage
00:01:09.735 [Pipeline] catchError
00:01:09.737 [Pipeline] {
00:01:09.751 [Pipeline] timeout
00:01:09.752 Timeout set to expire in 50 min
00:01:09.754 [Pipeline] {
00:01:09.768 [Pipeline] stage
00:01:09.769 [Pipeline] { (Tests)
00:01:09.779 [Pipeline] sh
00:01:10.061 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.061 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.061 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.061 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:10.061 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:10.061 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:10.061 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:10.061 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.061 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:10.061 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.061 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:10.061 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.061 + source /etc/os-release
00:01:10.061 ++ NAME='Fedora Linux'
00:01:10.061 ++ VERSION='38 (Cloud Edition)'
00:01:10.061 ++ ID=fedora
00:01:10.061 ++ VERSION_ID=38
00:01:10.061 ++ VERSION_CODENAME=
00:01:10.061 ++ PLATFORM_ID=platform:f38
00:01:10.061 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:10.061 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:10.061 ++ LOGO=fedora-logo-icon
00:01:10.061 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:10.061 ++ HOME_URL=https://fedoraproject.org/
00:01:10.061 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:10.061 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:10.061 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:10.061 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:10.061 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:10.061 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:10.061 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:10.061 ++ SUPPORT_END=2024-05-14
00:01:10.061 ++ VARIANT='Cloud Edition'
00:01:10.061 ++ VARIANT_ID=cloud
00:01:10.061 + uname -a
00:01:10.061 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:10.061 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:13.366 Hugepages
00:01:13.366 node hugesize free / total
00:01:13.366 node0 1048576kB 0 / 0
00:01:13.366 node0 2048kB 0 / 0
00:01:13.366 node1 1048576kB 0 / 0
00:01:13.366 node1 2048kB 0 / 0
00:01:13.366
00:01:13.366 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:13.366 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:13.366 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:13.366 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:13.366 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:13.366 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:13.366 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:13.366 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:13.366 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:13.366 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:13.366 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:13.366 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:13.366 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:13.366 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:13.366 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:13.366 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:13.366 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:13.366 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:13.366 + rm -f /tmp/spdk-ld-path
00:01:13.366 + source autorun-spdk.conf
00:01:13.366 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.366 ++ SPDK_TEST_NVMF=1
00:01:13.366 ++ SPDK_TEST_NVME_CLI=1
00:01:13.366 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.366 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.366 ++ SPDK_TEST_VFIOUSER=1
00:01:13.366 ++ SPDK_RUN_UBSAN=1
00:01:13.366 ++ NET_TYPE=phy
00:01:13.366 ++ RUN_NIGHTLY=0
00:01:13.366 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:13.366 + [[ -n '' ]]
00:01:13.366 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.366 + for M in /var/spdk/build-*-manifest.txt
00:01:13.366 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:13.366 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:13.366 + for M in /var/spdk/build-*-manifest.txt
00:01:13.366 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:13.366 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:13.366 ++ uname
00:01:13.367 + [[ Linux == \L\i\n\u\x ]]
00:01:13.367 + sudo dmesg -T
00:01:13.367 + sudo dmesg --clear
00:01:13.367 + dmesg_pid=3336207
00:01:13.367 + [[ Fedora Linux == FreeBSD ]]
00:01:13.367 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.367 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.367 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:13.367 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:13.367 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:13.367 + [[ -x /usr/src/fio-static/fio ]]
00:01:13.367 + sudo dmesg -Tw
00:01:13.367 + export FIO_BIN=/usr/src/fio-static/fio
00:01:13.367 + FIO_BIN=/usr/src/fio-static/fio
00:01:13.367 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:13.367 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:13.367 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:13.367 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.367 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.367 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:13.367 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.367 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.367 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.367 Test configuration:
00:01:13.367 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.367 SPDK_TEST_NVMF=1
00:01:13.367 SPDK_TEST_NVME_CLI=1
00:01:13.367 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.367 SPDK_TEST_NVMF_NICS=e810
00:01:13.367 SPDK_TEST_VFIOUSER=1
00:01:13.367 SPDK_RUN_UBSAN=1
00:01:13.367 NET_TYPE=phy
00:01:13.367 RUN_NIGHTLY=0
19:41:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
19:41:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
19:41:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:13.367 19:41:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:13.367 19:41:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.367 19:41:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.367 19:41:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.367 19:41:01 -- paths/export.sh@5 -- $ export PATH
00:01:13.367 19:41:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.367 19:41:01 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:13.367 19:41:01 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:13.367 19:41:01 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721842861.XXXXXX
00:01:13.367 19:41:01 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721842861.fMuBQ3
00:01:13.367 19:41:01 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:13.367 19:41:01 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:13.367 19:41:01 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:13.367 19:41:01 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:13.367 19:41:01 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:13.367 19:41:01 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:13.367 19:41:01 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:13.367 19:41:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:13.367 19:41:01 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:13.367 19:41:01 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:13.367 19:41:01 -- pm/common@17 -- $ local monitor
00:01:13.367 19:41:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.367 19:41:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.367 19:41:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.367 19:41:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.367 19:41:01 -- pm/common@21 -- $ date +%s
00:01:13.367 19:41:01 -- pm/common@25 -- $ sleep 1
00:01:13.367 19:41:01 -- pm/common@21 -- $ date +%s
00:01:13.367 19:41:01 -- pm/common@21 -- $ date +%s
00:01:13.367 19:41:01 -- pm/common@21 -- $ date +%s
00:01:13.367 19:41:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842861
00:01:13.367 19:41:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842861
00:01:13.367 19:41:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842861
00:01:13.367 19:41:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842861
00:01:13.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842861_collect-vmstat.pm.log
00:01:13.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842861_collect-cpu-load.pm.log
00:01:13.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842861_collect-cpu-temp.pm.log
00:01:13.367 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842861_collect-bmc-pm.bmc.pm.log
00:01:14.311 19:41:02 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:14.311 19:41:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:14.311 19:41:02 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:14.311 19:41:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:14.311 19:41:02 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.311 Wed Jul 24 05:41:02 PM UTC 2024
00:01:14.311 19:41:02 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.311 v24.09-pre-315-g19f5787c8
00:01:14.311 19:41:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:14.311 19:41:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:14.311 19:41:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:14.311 19:41:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:14.311 19:41:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:14.311 19:41:02 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.311 ************************************
00:01:14.311 START TEST ubsan
00:01:14.311 ************************************
00:01:14.311 19:41:02 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:14.311 using ubsan
00:01:14.311
00:01:14.311 real 0m0.001s
00:01:14.311 user 0m0.000s
00:01:14.311 sys 0m0.001s
00:01:14.311 19:41:02 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:14.311 19:41:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.311 ************************************
00:01:14.311 END TEST ubsan
00:01:14.311 ************************************
00:01:14.572 19:41:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:14.572 19:41:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:14.572 19:41:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:14.572 19:41:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:14.572 19:41:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:14.572 19:41:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:14.572 19:41:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:14.572 19:41:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:14.572 19:41:02 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:14.572 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:14.572 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:14.834 Using 'verbs' RDMA provider
00:01:30.687 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:42.929 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:42.929 Creating mk/config.mk...done.
00:01:42.929 Creating mk/cc.flags.mk...done.
00:01:42.929 Type 'make' to build.
00:01:42.929 19:41:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
19:41:30 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
19:41:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable
19:41:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.929 ************************************
00:01:42.929 START TEST make
00:01:42.929 ************************************
00:01:42.929 19:41:30 make -- common/autotest_common.sh@1125 -- $ make -j144
00:01:43.190 make[1]: Nothing to be done for 'all'.
00:01:44.573 The Meson build system
00:01:44.573 Version: 1.3.1
00:01:44.573 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:44.573 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:44.573 Build type: native build
00:01:44.573 Project name: libvfio-user
00:01:44.573 Project version: 0.0.1
00:01:44.573 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:44.573 C linker for the host machine: cc ld.bfd 2.39-16
00:01:44.573 Host machine cpu family: x86_64
00:01:44.573 Host machine cpu: x86_64
00:01:44.573 Run-time dependency threads found: YES
00:01:44.573 Library dl found: YES
00:01:44.573 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:44.573 Run-time dependency json-c found: YES 0.17
00:01:44.573 Run-time dependency cmocka found: YES 1.1.7
00:01:44.573 Program pytest-3 found: NO
00:01:44.573 Program flake8 found: NO
00:01:44.573 Program misspell-fixer found: NO
00:01:44.573 Program restructuredtext-lint found: NO
00:01:44.573 Program valgrind found: YES (/usr/bin/valgrind)
00:01:44.573 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:44.573 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:44.573 Compiler for C supports arguments -Wwrite-strings: YES
00:01:44.574 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:44.574 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:44.574 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:44.574 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:44.574 Build targets in project: 8
00:01:44.574 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:44.574 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:44.574
00:01:44.574 libvfio-user 0.0.1
00:01:44.574
00:01:44.574 User defined options
00:01:44.574 buildtype : debug
00:01:44.574 default_library: shared
00:01:44.574 libdir : /usr/local/lib
00:01:44.574
00:01:44.574 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:44.574 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:44.574 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:44.574 [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:44.574 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:44.574 [4/37] Compiling C object samples/null.p/null.c.o
00:01:44.832 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:44.832 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:44.832 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:44.832 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:44.832 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:44.832 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:44.832 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:44.832 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:44.832 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:44.832 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:44.832 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:44.832 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:44.832 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:44.832 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:44.832 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:44.832 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:44.832 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:44.832 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:44.832 [23/37] Compiling C object samples/server.p/server.c.o
00:01:44.832 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:44.832 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:44.832 [26/37] Compiling C object samples/client.p/client.c.o
00:01:44.832 [27/37] Linking target samples/client
00:01:44.832 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:44.832 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:44.832 [30/37] Linking target test/unit_tests
00:01:44.832 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:45.092 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:45.092 [33/37] Linking target samples/server
00:01:45.092 [34/37] Linking target samples/gpio-pci-idio-16
00:01:45.092 [35/37] Linking target samples/null
00:01:45.092 [36/37] Linking target samples/lspci
00:01:45.092 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:45.092 INFO: autodetecting backend as ninja
00:01:45.092 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:45.092 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:45.354 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:45.354 ninja: no work to do.
00:01:51.948 The Meson build system
00:01:51.949 Version: 1.3.1
00:01:51.949 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:51.949 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:51.949 Build type: native build
00:01:51.949 Program cat found: YES (/usr/bin/cat)
00:01:51.949 Project name: DPDK
00:01:51.949 Project version: 24.03.0
00:01:51.949 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:51.949 C linker for the host machine: cc ld.bfd 2.39-16
00:01:51.949 Host machine cpu family: x86_64
00:01:51.949 Host machine cpu: x86_64
00:01:51.949 Message: ## Building in Developer Mode ##
00:01:51.949 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:51.949 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:51.949 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:51.949 Program python3 found: YES (/usr/bin/python3)
00:01:51.949 Program cat found: YES (/usr/bin/cat)
00:01:51.949 Compiler for C supports arguments -march=native: YES
00:01:51.949 Checking for size of "void *" : 8
00:01:51.949 Checking for size of "void *" : 8 (cached)
00:01:51.949 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:51.949 Library m found: YES
00:01:51.949 Library numa found: YES
00:01:51.949 Has header "numaif.h" : YES
00:01:51.949 Library fdt found: NO
00:01:51.949 Library execinfo found: NO
00:01:51.949 Has header "execinfo.h" : YES
00:01:51.949 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:51.949 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:51.949 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:51.949 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:51.949 Run-time dependency openssl found: YES 3.0.9
00:01:51.949 Run-time dependency libpcap found: YES 1.10.4
00:01:51.949 Has header "pcap.h" with dependency libpcap: YES
00:01:51.949 Compiler for C supports arguments -Wcast-qual: YES
00:01:51.949 Compiler for C supports arguments -Wdeprecated: YES
00:01:51.949 Compiler for C supports arguments -Wformat: YES
00:01:51.949 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:51.949 Compiler for C supports arguments -Wformat-security: NO
00:01:51.949 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:51.949 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:51.949 Compiler for C supports arguments -Wnested-externs: YES
00:01:51.949 Compiler for C supports arguments -Wold-style-definition: YES
00:01:51.949 Compiler for C supports arguments -Wpointer-arith: YES
00:01:51.949 Compiler for C supports arguments -Wsign-compare: YES
00:01:51.949 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:51.949 Compiler for C supports arguments -Wundef: YES
00:01:51.949 Compiler for C supports arguments -Wwrite-strings: YES
00:01:51.949 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:51.949 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:51.949 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:51.949 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:51.949 Program objdump found: YES (/usr/bin/objdump)
00:01:51.949 Compiler for C supports arguments -mavx512f: YES
00:01:51.949 Checking if "AVX512 checking" compiles: YES
00:01:51.949 Fetching value of define "__SSE4_2__" : 1
00:01:51.949 Fetching value of define "__AES__" : 1
00:01:51.949 Fetching value of define "__AVX__" : 1
00:01:51.949 Fetching value of define "__AVX2__" : 1
00:01:51.949 Fetching value of define "__AVX512BW__" : 1
00:01:51.949 Fetching value of define "__AVX512CD__" : 1
00:01:51.949 Fetching value of define "__AVX512DQ__" : 1
00:01:51.949 Fetching value of define "__AVX512F__" : 1
00:01:51.949 Fetching value of define "__AVX512VL__" : 1
00:01:51.949 Fetching value of define "__PCLMUL__" : 1
00:01:51.949 Fetching value of define "__RDRND__" : 1
00:01:51.949 Fetching value of define "__RDSEED__" : 1
00:01:51.949 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:51.949 Fetching value of define "__znver1__" : (undefined)
00:01:51.949 Fetching value of define "__znver2__" : (undefined)
00:01:51.949 Fetching value of define "__znver3__" : (undefined)
00:01:51.949 Fetching value of define "__znver4__" : (undefined)
00:01:51.949 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:51.949 Message: lib/log: Defining dependency "log"
00:01:51.949 Message: lib/kvargs: Defining dependency "kvargs"
00:01:51.949 Message: lib/telemetry: Defining dependency "telemetry"
00:01:51.949 Checking for function "getentropy" : NO
00:01:51.949 Message: lib/eal: Defining dependency "eal"
00:01:51.949 Message: lib/ring: Defining dependency "ring"
00:01:51.949 Message: lib/rcu: Defining dependency "rcu"
00:01:51.949 Message: lib/mempool: Defining dependency "mempool"
00:01:51.949 Message: lib/mbuf: Defining dependency "mbuf"
00:01:51.949 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:51.949 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:51.949 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:51.949 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:51.949 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:51.949 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:51.949 Compiler for C supports arguments -mpclmul: YES
00:01:51.949 Compiler for C supports arguments -maes: YES
00:01:51.949 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:51.949 Compiler for C supports arguments -mavx512bw: YES
00:01:51.949 Compiler for C supports arguments -mavx512dq: YES
00:01:51.949 Compiler for C supports arguments -mavx512vl: YES
00:01:51.949 Compiler for C supports arguments -mvpclmulqdq: YES
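[Editor's aside, not part of the log: the "Compiler for C supports arguments …" entries above are Meson probing whether the host compiler accepts each flag by test-compiling a trivial program. A minimal hand sketch of the same probe, assuming only that `cc` is on PATH (the `probe_cflag` helper name is ours, not Meson's); a rejected flag, or a missing compiler, yields NO:]

```shell
# Probe whether the C compiler accepts a given flag, Meson-style:
# compile an empty program from stdin and report YES/NO.
probe_cflag() {
    echo 'int main(void) { return 0; }' \
        | cc "$1" -x c - -o /dev/null 2>/dev/null \
        && echo YES || echo NO
}
result=$(probe_cflag -mavx512f)
echo "-mavx512f: $result"
```
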
00:01:51.949 Compiler for C supports arguments -mavx2: YES
00:01:51.949 Compiler for C supports arguments -mavx: YES
00:01:51.949 Message: lib/net: Defining dependency "net"
00:01:51.949 Message: lib/meter: Defining dependency "meter"
00:01:51.949 Message: lib/ethdev: Defining dependency "ethdev"
00:01:51.949 Message: lib/pci: Defining dependency "pci"
00:01:51.949 Message: lib/cmdline: Defining dependency "cmdline"
00:01:51.949 Message: lib/hash: Defining dependency "hash"
00:01:51.949 Message: lib/timer: Defining dependency "timer"
00:01:51.949 Message: lib/compressdev: Defining dependency "compressdev"
00:01:51.949 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:51.949 Message: lib/dmadev: Defining dependency "dmadev"
00:01:51.949 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:51.949 Message: lib/power: Defining dependency "power"
00:01:51.949 Message: lib/reorder: Defining dependency "reorder"
00:01:51.949 Message: lib/security: Defining dependency "security"
00:01:51.949 Has header "linux/userfaultfd.h" : YES
00:01:51.949 Has header "linux/vduse.h" : YES
00:01:51.949 Message: lib/vhost: Defining dependency "vhost"
00:01:51.949 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:51.949 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:51.949 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:51.949 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:51.949 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:51.949 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:51.949 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:51.949 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:51.949 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:51.949 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:51.949 Program doxygen found: YES (/usr/bin/doxygen)
00:01:51.949 Configuring doxy-api-html.conf using configuration
00:01:51.949 Configuring doxy-api-man.conf using configuration
00:01:51.949 Program mandb found: YES (/usr/bin/mandb)
00:01:51.949 Program sphinx-build found: NO
00:01:51.949 Configuring rte_build_config.h using configuration
00:01:51.949 Message:
00:01:51.949 =================
00:01:51.949 Applications Enabled
00:01:51.949 =================
00:01:51.949
00:01:51.949 apps:
00:01:51.949
00:01:51.949
00:01:51.949 Message:
00:01:51.949 =================
00:01:51.949 Libraries Enabled
00:01:51.949 =================
00:01:51.949
00:01:51.949 libs:
00:01:51.949 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:51.949 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:51.949 cryptodev, dmadev, power, reorder, security, vhost,
00:01:51.949
00:01:51.949 Message:
00:01:51.949 ===============
00:01:51.949 Drivers Enabled
00:01:51.949 ===============
00:01:51.949
00:01:51.949 common:
00:01:51.949
00:01:51.949 bus:
00:01:51.949 pci, vdev,
00:01:51.949 mempool:
00:01:51.949 ring,
00:01:51.949 dma:
00:01:51.949
00:01:51.949 net:
00:01:51.949
00:01:51.949 crypto:
00:01:51.949
00:01:51.949 compress:
00:01:51.949
00:01:51.949 vdpa:
00:01:51.949
00:01:51.949
00:01:51.949 Message:
00:01:51.949 =================
00:01:51.949 Content Skipped
00:01:51.949 =================
00:01:51.949
00:01:51.949 apps:
00:01:51.949 dumpcap: explicitly disabled via build config
00:01:51.949 graph: explicitly disabled via build config
00:01:51.949 pdump: explicitly disabled via build config
00:01:51.949 proc-info: explicitly disabled via build config
00:01:51.949 test-acl: explicitly disabled via build config
00:01:51.949 test-bbdev: explicitly disabled via build config
00:01:51.949 test-cmdline: explicitly disabled via build config
00:01:51.949 test-compress-perf: explicitly disabled via build config
00:01:51.949 test-crypto-perf: explicitly disabled via build config
00:01:51.949 test-dma-perf: explicitly disabled via build config
00:01:51.949 test-eventdev: explicitly disabled via build config
00:01:51.949 test-fib: explicitly disabled via build config
00:01:51.949 test-flow-perf: explicitly disabled via build config
00:01:51.950 test-gpudev: explicitly disabled via build config
00:01:51.950 test-mldev: explicitly disabled via build config
00:01:51.950 test-pipeline: explicitly disabled via build config
00:01:51.950 test-pmd: explicitly disabled via build config
00:01:51.950 test-regex: explicitly disabled via build config
00:01:51.950 test-sad: explicitly disabled via build config
00:01:51.950 test-security-perf: explicitly disabled via build config
00:01:51.950
00:01:51.950 libs:
00:01:51.950 argparse: explicitly disabled via build config
00:01:51.950 metrics: explicitly disabled via build config
00:01:51.950 acl: explicitly disabled via build config
00:01:51.950 bbdev: explicitly disabled via build config
00:01:51.950 bitratestats: explicitly disabled via build config
00:01:51.950 bpf: explicitly disabled via build config
00:01:51.950 cfgfile: explicitly disabled via build config
00:01:51.950 distributor: explicitly disabled via build config
00:01:51.950 efd: explicitly disabled via build config
00:01:51.950 eventdev: explicitly disabled via build config
00:01:51.950 dispatcher: explicitly disabled via build config
00:01:51.950 gpudev: explicitly disabled via build config
00:01:51.950 gro: explicitly disabled via build config
00:01:51.950 gso: explicitly disabled via build config
00:01:51.950 ip_frag: explicitly disabled via build config
00:01:51.950 jobstats: explicitly disabled via build config
00:01:51.950 latencystats: explicitly disabled via build config
00:01:51.950 lpm: explicitly disabled via build config
00:01:51.950 member: explicitly disabled via build config
00:01:51.950 pcapng: explicitly disabled via build config
00:01:51.950 rawdev: explicitly disabled via build config
00:01:51.950 regexdev: explicitly disabled via build config
00:01:51.950 mldev: explicitly disabled via build config
00:01:51.950 rib: explicitly disabled via build config
00:01:51.950 sched: explicitly disabled via build config
00:01:51.950 stack: explicitly disabled via build config
00:01:51.950 ipsec: explicitly disabled via build config
00:01:51.950 pdcp: explicitly disabled via build config
00:01:51.950 fib: explicitly disabled via build config
00:01:51.950 port: explicitly disabled via build config
00:01:51.950 pdump: explicitly disabled via build config
00:01:51.950 table: explicitly disabled via build config
00:01:51.950 pipeline: explicitly disabled via build config
00:01:51.950 graph: explicitly disabled via build config
00:01:51.950 node: explicitly disabled via build config
00:01:51.950
00:01:51.950 drivers:
00:01:51.950 common/cpt: not in enabled drivers build config
00:01:51.950 common/dpaax: not in enabled drivers build config
00:01:51.950 common/iavf: not in enabled drivers build config
00:01:51.950 common/idpf: not in enabled drivers build config
00:01:51.950 common/ionic: not in enabled drivers build config
00:01:51.950 common/mvep: not in enabled drivers build config
00:01:51.950 common/octeontx: not in enabled drivers build config
00:01:51.950 bus/auxiliary: not in enabled drivers build config
00:01:51.950 bus/cdx: not in enabled drivers build config
00:01:51.950 bus/dpaa: not in enabled drivers build config
00:01:51.950 bus/fslmc: not in enabled drivers build config
00:01:51.950 bus/ifpga: not in enabled drivers build config
00:01:51.950 bus/platform: not in enabled drivers build config
00:01:51.950 bus/uacce: not in enabled drivers build config
00:01:51.950 bus/vmbus: not in enabled drivers build config
00:01:51.950 common/cnxk: not in enabled drivers build config
00:01:51.950 common/mlx5: not in enabled drivers build config
00:01:51.950 common/nfp: not in enabled drivers build config
00:01:51.950 common/nitrox: not in enabled drivers build config
00:01:51.950 common/qat: not in enabled drivers build config
00:01:51.950 common/sfc_efx: not in enabled drivers build config
00:01:51.950 mempool/bucket: not in enabled drivers build config
00:01:51.950 mempool/cnxk: not in enabled drivers build config
00:01:51.950 mempool/dpaa: not in enabled drivers build config
00:01:51.950 mempool/dpaa2: not in enabled drivers build config
00:01:51.950 mempool/octeontx: not in enabled drivers build config
00:01:51.950 mempool/stack: not in enabled drivers build config
00:01:51.950 dma/cnxk: not in enabled drivers build config
00:01:51.950 dma/dpaa: not in enabled drivers build config
00:01:51.950 dma/dpaa2: not in enabled drivers build config
00:01:51.950 dma/hisilicon: not in enabled drivers build config
00:01:51.950 dma/idxd: not in enabled drivers build config
00:01:51.950 dma/ioat: not in enabled drivers build config
00:01:51.950 dma/skeleton: not in enabled drivers build config
00:01:51.950 net/af_packet: not in enabled drivers build config
00:01:51.950 net/af_xdp: not in enabled drivers build config
00:01:51.950 net/ark: not in enabled drivers build config
00:01:51.950 net/atlantic: not in enabled drivers build config
00:01:51.950 net/avp: not in enabled drivers build config
00:01:51.950 net/axgbe: not in enabled drivers build config
00:01:51.950 net/bnx2x: not in enabled drivers build config
00:01:51.950 net/bnxt: not in enabled drivers build config
00:01:51.950 net/bonding: not in enabled drivers build config
00:01:51.950 net/cnxk: not in enabled drivers build config
00:01:51.950 net/cpfl: not in enabled drivers build config
00:01:51.950 net/cxgbe: not in enabled drivers build config
00:01:51.950 net/dpaa: not in enabled drivers build config
00:01:51.950 net/dpaa2: not in enabled drivers build config
00:01:51.950 net/e1000: not in enabled drivers build config
00:01:51.950 net/ena: not in enabled drivers build config
00:01:51.950 net/enetc: not in enabled drivers build config
00:01:51.950 net/enetfec: not in enabled drivers build config
00:01:51.950 net/enic: not in enabled drivers build config
00:01:51.950 net/failsafe: not in enabled drivers build config
00:01:51.950 net/fm10k: not in enabled drivers build config
00:01:51.950 net/gve: not in enabled drivers build config
00:01:51.950 net/hinic: not in enabled drivers build config
00:01:51.950 net/hns3: not in enabled drivers build config
00:01:51.950 net/i40e: not in enabled drivers build config
00:01:51.950 net/iavf: not in enabled drivers build config
00:01:51.950 net/ice: not in enabled drivers build config
00:01:51.950 net/idpf: not in enabled drivers build config
00:01:51.950 net/igc: not in enabled drivers build config
00:01:51.950 net/ionic: not in enabled drivers build config
00:01:51.950 net/ipn3ke: not in enabled drivers build config
00:01:51.950 net/ixgbe: not in enabled drivers build config
00:01:51.950 net/mana: not in enabled drivers build config
00:01:51.950 net/memif: not in enabled drivers build config
00:01:51.950 net/mlx4: not in enabled drivers build config
00:01:51.950 net/mlx5: not in enabled drivers build config
00:01:51.950 net/mvneta: not in enabled drivers build config
00:01:51.950 net/mvpp2: not in enabled drivers build config
00:01:51.950 net/netvsc: not in enabled drivers build config
00:01:51.950 net/nfb: not in enabled drivers build config
00:01:51.950 net/nfp: not in enabled drivers build config
00:01:51.950 net/ngbe: not in enabled drivers build config
00:01:51.950 net/null: not in enabled drivers build config
00:01:51.950 net/octeontx: not in enabled drivers build config
00:01:51.950 net/octeon_ep: not in enabled drivers build config
00:01:51.950 net/pcap: not in enabled drivers build config
00:01:51.950 net/pfe: not in enabled drivers build config
00:01:51.950 net/qede: not in enabled drivers build config
00:01:51.950 net/ring: not in enabled drivers build config
00:01:51.950 net/sfc: not in enabled drivers build config
00:01:51.950 net/softnic: not in enabled drivers build config
00:01:51.950 net/tap: not in enabled drivers build config
00:01:51.950 net/thunderx: not in enabled drivers build config
00:01:51.950 net/txgbe: not in enabled drivers build config
00:01:51.950 net/vdev_netvsc: not in enabled drivers build config
00:01:51.950 net/vhost: not in enabled drivers build config
00:01:51.950 net/virtio: not in enabled drivers build config
00:01:51.950 net/vmxnet3: not in enabled drivers build config
00:01:51.950 raw/*: missing internal dependency, "rawdev"
00:01:51.950 crypto/armv8: not in enabled drivers build config
00:01:51.950 crypto/bcmfs: not in enabled drivers build config
00:01:51.950 crypto/caam_jr: not in enabled drivers build config
00:01:51.950 crypto/ccp: not in enabled drivers build config
00:01:51.950 crypto/cnxk: not in enabled drivers build config
00:01:51.950 crypto/dpaa_sec: not in enabled drivers build config
00:01:51.950 crypto/dpaa2_sec: not in enabled drivers build config
00:01:51.950 crypto/ipsec_mb: not in enabled drivers build config
00:01:51.950 crypto/mlx5: not in enabled drivers build config
00:01:51.950 crypto/mvsam: not in enabled drivers build config
00:01:51.950 crypto/nitrox: not in enabled drivers build config
00:01:51.950 crypto/null: not in enabled drivers build config
00:01:51.950 crypto/octeontx: not in enabled drivers build config
00:01:51.950 crypto/openssl: not in enabled drivers build config
00:01:51.950 crypto/scheduler: not in enabled drivers build config
00:01:51.950 crypto/uadk: not in enabled drivers build config
00:01:51.950 crypto/virtio: not in enabled drivers build config
00:01:51.950 compress/isal: not in enabled drivers build config
00:01:51.950 compress/mlx5: not in enabled drivers build config
00:01:51.950 compress/nitrox: not in enabled drivers build config
00:01:51.950 compress/octeontx: not in enabled drivers build config
00:01:51.950 compress/zlib: not in enabled drivers build config
00:01:51.950 regex/*: missing internal dependency, "regexdev"
00:01:51.950 ml/*: missing internal dependency, "mldev"
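[Editor's aside, not part of the log: the "User defined options" summary recorded in the configure output below corresponds to a `meson setup` invocation roughly along these lines. This is a hedged reconstruction from the summary, not the actual SPDK wrapper command (which is not shown in this log); the elided `disable_apps`/`disable_libs` lists are spelled out in full in the summary itself.]

```shell
# Hypothetical reconstruction of the configure step (non-runnable sketch;
# requires meson and the DPDK source tree).
meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Ddisable_apps=... \
  -Ddisable_libs=... \
  -Denable_docs=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_kmods=false \
  -Dmax_lcores=128 \
  -Dtests=false
```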
00:01:51.950 vdpa/ifc: not in enabled drivers build config
00:01:51.950 vdpa/mlx5: not in enabled drivers build config
00:01:51.950 vdpa/nfp: not in enabled drivers build config
00:01:51.950 vdpa/sfc: not in enabled drivers build config
00:01:51.950 event/*: missing internal dependency, "eventdev"
00:01:51.950 baseband/*: missing internal dependency, "bbdev"
00:01:51.950 gpu/*: missing internal dependency, "gpudev"
00:01:51.950
00:01:51.950
00:01:51.950 Build targets in project: 84
00:01:51.950
00:01:51.950 DPDK 24.03.0
00:01:51.950
00:01:51.950 User defined options
00:01:51.950 buildtype : debug
00:01:51.950 default_library : shared
00:01:51.950 libdir : lib
00:01:51.950 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:51.950 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:51.950 c_link_args :
00:01:51.950 cpu_instruction_set: native
00:01:51.951 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:51.951 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:51.951 enable_docs : false
00:01:51.951 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:51.951 enable_kmods : false
00:01:51.951 max_lcores : 128
00:01:51.951 tests : false
00:01:51.951
00:01:51.951 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:51.951 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:51.951 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:51.951 [2/267] Compiling C object
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.951 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.951 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.951 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.951 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.951 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.951 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.951 [9/267] Linking static target lib/librte_kvargs.a 00:01:51.951 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.951 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.951 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.951 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.951 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.951 [15/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.951 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.951 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.212 [18/267] Linking static target lib/librte_log.a 00:01:52.212 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.212 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.212 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.212 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.212 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.212 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.212 [25/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.212 [26/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.212 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.212 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.212 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.212 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.212 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.212 [32/267] Linking static target lib/librte_pci.a 00:01:52.212 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.212 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.212 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.212 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.212 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.212 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.472 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.472 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.472 [41/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.472 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.472 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.472 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.472 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.472 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.472 [47/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.472 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.472 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.472 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.472 [51/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.472 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.472 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.472 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.472 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.472 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.472 [57/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.472 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.472 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.472 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.472 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.472 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.472 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.472 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.472 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.472 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.472 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.472 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.472 [69/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.472 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.472 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.472 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.472 [73/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.472 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.472 [75/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.472 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.472 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.472 [78/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:52.472 [79/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.472 [80/267] Linking static target lib/librte_timer.a 00:01:52.472 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.472 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.472 [83/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.472 [84/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.472 [85/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.472 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.472 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.472 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.472 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.472 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.472 [91/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:01:52.472 [92/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.472 [93/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.472 [94/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.472 [95/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.472 [96/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.472 [97/267] Linking static target lib/librte_meter.a 00:01:52.472 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.472 [99/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.472 [100/267] Linking static target lib/librte_telemetry.a 00:01:52.472 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.472 [102/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.472 [103/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.472 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.472 [105/267] Linking static target lib/librte_ring.a 00:01:52.472 [106/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.472 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.472 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.472 [109/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.472 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.472 [111/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.472 [112/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.472 [113/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.472 [114/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.472 [115/267] 
Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.733 [116/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.733 [117/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.733 [118/267] Linking static target lib/librte_dmadev.a 00:01:52.733 [119/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.733 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.733 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.733 [122/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.733 [123/267] Linking static target lib/librte_cmdline.a 00:01:52.733 [124/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.733 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.733 [126/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.733 [127/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.733 [128/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.733 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.733 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.733 [131/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.733 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.733 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.733 [134/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.733 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.733 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.733 [137/267] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.733 [138/267] Linking static target lib/librte_rcu.a 00:01:52.733 [139/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.733 [140/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.733 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.733 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.733 [143/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.733 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.733 [145/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.733 [146/267] Linking static target lib/librte_net.a 00:01:52.733 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.733 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.733 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.733 [150/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.733 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.733 [152/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.733 [153/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.733 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.733 [155/267] Linking static target lib/librte_power.a 00:01:52.733 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.733 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.733 [158/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.733 [159/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.733 [160/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.733 
[161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.733 [162/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.733 [163/267] Linking target lib/librte_log.so.24.1 00:01:52.733 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.733 [165/267] Linking static target lib/librte_reorder.a 00:01:52.733 [166/267] Linking static target lib/librte_compressdev.a 00:01:52.733 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.733 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.733 [169/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.733 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.733 [171/267] Linking static target lib/librte_security.a 00:01:52.733 [172/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.733 [173/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.733 [174/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.733 [175/267] Linking static target lib/librte_mempool.a 00:01:52.733 [176/267] Linking static target lib/librte_eal.a 00:01:52.733 [177/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:52.733 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.733 [179/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.733 [180/267] Linking static target lib/librte_mbuf.a 00:01:52.733 [181/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.733 [182/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.733 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:52.995 [184/267] Linking target lib/librte_kvargs.so.24.1 00:01:52.995 [185/267] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.995 [186/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.995 [187/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.995 [188/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.995 [189/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.995 [190/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.995 [191/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.995 [192/267] Linking static target drivers/librte_bus_pci.a 00:01:52.995 [193/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.995 [194/267] Linking static target drivers/librte_bus_vdev.a 00:01:52.995 [195/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.995 [196/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.995 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.995 [198/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.995 [199/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.995 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.995 [201/267] Linking static target lib/librte_hash.a 00:01:52.995 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.995 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.995 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.995 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:52.995 [206/267] Linking static target drivers/librte_mempool_ring.a 00:01:52.995 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.995 [208/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:52.995 [209/267] Linking static target lib/librte_cryptodev.a 00:01:53.256 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.256 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:53.256 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.256 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.256 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:53.572 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.572 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.572 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.572 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.572 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.572 [220/267] Linking static target lib/librte_ethdev.a 00:01:53.832 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.832 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.832 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.832 [224/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.832 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:54.093 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.666 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.666 [228/267] Linking static target lib/librte_vhost.a 00:01:55.238 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.626 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.217 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.603 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.603 [233/267] Linking target lib/librte_eal.so.24.1 00:02:04.864 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:04.864 [235/267] Linking target lib/librte_timer.so.24.1 00:02:04.864 [236/267] Linking target lib/librte_ring.so.24.1 00:02:04.864 [237/267] Linking target lib/librte_meter.so.24.1 00:02:04.864 [238/267] Linking target lib/librte_pci.so.24.1 00:02:04.864 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:04.864 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:04.864 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:04.864 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:04.864 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:04.864 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:04.864 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:04.864 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:05.126 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:05.126 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:05.126 [249/267] 
Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.126 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.126 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.126 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:05.387 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:05.387 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:05.387 [255/267] Linking target lib/librte_net.so.24.1 00:02:05.387 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:05.387 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:05.387 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:05.387 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:05.655 [260/267] Linking target lib/librte_hash.so.24.1 00:02:05.655 [261/267] Linking target lib/librte_security.so.24.1 00:02:05.655 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:05.655 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:05.655 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:05.655 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:05.655 [266/267] Linking target lib/librte_power.so.24.1 00:02:05.915 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:05.915 INFO: autodetecting backend as ninja 00:02:05.915 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:06.859 CC lib/ut_mock/mock.o 00:02:06.859 CC lib/log/log.o 00:02:06.859 CC lib/log/log_flags.o 00:02:06.859 CC lib/ut/ut.o 00:02:06.859 CC lib/log/log_deprecated.o 00:02:07.121 LIB libspdk_ut_mock.a 00:02:07.121 LIB libspdk_log.a 00:02:07.121 LIB libspdk_ut.a 00:02:07.121 SO libspdk_ut_mock.so.6.0 00:02:07.121 SO 
libspdk_ut.so.2.0 00:02:07.121 SO libspdk_log.so.7.0 00:02:07.121 SYMLINK libspdk_ut_mock.so 00:02:07.121 SYMLINK libspdk_ut.so 00:02:07.121 SYMLINK libspdk_log.so 00:02:07.694 CXX lib/trace_parser/trace.o 00:02:07.694 CC lib/util/base64.o 00:02:07.694 CC lib/util/bit_array.o 00:02:07.694 CC lib/util/cpuset.o 00:02:07.694 CC lib/util/crc16.o 00:02:07.694 CC lib/dma/dma.o 00:02:07.694 CC lib/util/crc32.o 00:02:07.694 CC lib/ioat/ioat.o 00:02:07.694 CC lib/util/crc32c.o 00:02:07.694 CC lib/util/crc32_ieee.o 00:02:07.694 CC lib/util/crc64.o 00:02:07.694 CC lib/util/dif.o 00:02:07.694 CC lib/util/fd.o 00:02:07.694 CC lib/util/fd_group.o 00:02:07.694 CC lib/util/file.o 00:02:07.694 CC lib/util/hexlify.o 00:02:07.694 CC lib/util/iov.o 00:02:07.694 CC lib/util/math.o 00:02:07.694 CC lib/util/net.o 00:02:07.694 CC lib/util/pipe.o 00:02:07.694 CC lib/util/strerror_tls.o 00:02:07.694 CC lib/util/string.o 00:02:07.694 CC lib/util/uuid.o 00:02:07.694 CC lib/util/xor.o 00:02:07.694 CC lib/util/zipf.o 00:02:07.694 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.694 CC lib/vfio_user/host/vfio_user.o 00:02:07.694 LIB libspdk_dma.a 00:02:07.955 SO libspdk_dma.so.4.0 00:02:07.955 LIB libspdk_ioat.a 00:02:07.955 SYMLINK libspdk_dma.so 00:02:07.955 SO libspdk_ioat.so.7.0 00:02:07.955 LIB libspdk_vfio_user.a 00:02:07.955 SYMLINK libspdk_ioat.so 00:02:07.955 SO libspdk_vfio_user.so.5.0 00:02:07.955 LIB libspdk_util.a 00:02:08.217 SYMLINK libspdk_vfio_user.so 00:02:08.217 SO libspdk_util.so.10.0 00:02:08.217 SYMLINK libspdk_util.so 00:02:08.479 LIB libspdk_trace_parser.a 00:02:08.479 SO libspdk_trace_parser.so.5.0 00:02:08.479 SYMLINK libspdk_trace_parser.so 00:02:08.739 CC lib/conf/conf.o 00:02:08.739 CC lib/json/json_parse.o 00:02:08.739 CC lib/json/json_util.o 00:02:08.739 CC lib/json/json_write.o 00:02:08.739 CC lib/idxd/idxd.o 00:02:08.739 CC lib/rdma_provider/common.o 00:02:08.739 CC lib/idxd/idxd_kernel.o 00:02:08.739 CC lib/vmd/vmd.o 00:02:08.739 CC lib/idxd/idxd_user.o 
00:02:08.739 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:08.739 CC lib/vmd/led.o 00:02:08.739 CC lib/rdma_utils/rdma_utils.o 00:02:08.739 CC lib/env_dpdk/env.o 00:02:08.739 CC lib/env_dpdk/memory.o 00:02:08.739 CC lib/env_dpdk/pci.o 00:02:08.739 CC lib/env_dpdk/init.o 00:02:08.739 CC lib/env_dpdk/threads.o 00:02:08.739 CC lib/env_dpdk/pci_ioat.o 00:02:08.740 CC lib/env_dpdk/pci_virtio.o 00:02:08.740 CC lib/env_dpdk/pci_vmd.o 00:02:08.740 CC lib/env_dpdk/pci_idxd.o 00:02:08.740 CC lib/env_dpdk/pci_event.o 00:02:08.740 CC lib/env_dpdk/sigbus_handler.o 00:02:08.740 CC lib/env_dpdk/pci_dpdk.o 00:02:08.740 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.740 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:09.000 LIB libspdk_conf.a 00:02:09.000 LIB libspdk_rdma_provider.a 00:02:09.000 SO libspdk_conf.so.6.0 00:02:09.000 SO libspdk_rdma_provider.so.6.0 00:02:09.000 LIB libspdk_json.a 00:02:09.000 LIB libspdk_rdma_utils.a 00:02:09.000 SYMLINK libspdk_conf.so 00:02:09.000 SO libspdk_json.so.6.0 00:02:09.000 SO libspdk_rdma_utils.so.1.0 00:02:09.000 SYMLINK libspdk_rdma_provider.so 00:02:09.000 SYMLINK libspdk_json.so 00:02:09.000 SYMLINK libspdk_rdma_utils.so 00:02:09.261 LIB libspdk_idxd.a 00:02:09.261 SO libspdk_idxd.so.12.0 00:02:09.261 LIB libspdk_vmd.a 00:02:09.261 SO libspdk_vmd.so.6.0 00:02:09.261 SYMLINK libspdk_idxd.so 00:02:09.261 SYMLINK libspdk_vmd.so 00:02:09.522 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:09.522 CC lib/jsonrpc/jsonrpc_server.o 00:02:09.522 CC lib/jsonrpc/jsonrpc_client.o 00:02:09.522 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:09.783 LIB libspdk_jsonrpc.a 00:02:09.783 SO libspdk_jsonrpc.so.6.0 00:02:09.783 SYMLINK libspdk_jsonrpc.so 00:02:09.783 LIB libspdk_env_dpdk.a 00:02:10.044 SO libspdk_env_dpdk.so.15.0 00:02:10.044 SYMLINK libspdk_env_dpdk.so 00:02:10.044 CC lib/rpc/rpc.o 00:02:10.304 LIB libspdk_rpc.a 00:02:10.304 SO libspdk_rpc.so.6.0 00:02:10.565 SYMLINK libspdk_rpc.so 00:02:10.826 CC lib/keyring/keyring_rpc.o 00:02:10.826 CC lib/keyring/keyring.o 
00:02:10.826 CC lib/trace/trace.o 00:02:10.826 CC lib/trace/trace_flags.o 00:02:10.826 CC lib/notify/notify.o 00:02:10.826 CC lib/trace/trace_rpc.o 00:02:10.826 CC lib/notify/notify_rpc.o 00:02:11.087 LIB libspdk_notify.a 00:02:11.087 LIB libspdk_keyring.a 00:02:11.087 SO libspdk_notify.so.6.0 00:02:11.087 SO libspdk_keyring.so.1.0 00:02:11.087 LIB libspdk_trace.a 00:02:11.087 SYMLINK libspdk_notify.so 00:02:11.087 SO libspdk_trace.so.10.0 00:02:11.087 SYMLINK libspdk_keyring.so 00:02:11.087 SYMLINK libspdk_trace.so 00:02:11.659 CC lib/sock/sock.o 00:02:11.659 CC lib/sock/sock_rpc.o 00:02:11.659 CC lib/thread/thread.o 00:02:11.659 CC lib/thread/iobuf.o 00:02:11.961 LIB libspdk_sock.a 00:02:11.961 SO libspdk_sock.so.10.0 00:02:11.961 SYMLINK libspdk_sock.so 00:02:12.225 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:12.484 CC lib/nvme/nvme_ctrlr.o 00:02:12.484 CC lib/nvme/nvme_fabric.o 00:02:12.484 CC lib/nvme/nvme_ns_cmd.o 00:02:12.484 CC lib/nvme/nvme_ns.o 00:02:12.484 CC lib/nvme/nvme_pcie_common.o 00:02:12.484 CC lib/nvme/nvme_pcie.o 00:02:12.484 CC lib/nvme/nvme_qpair.o 00:02:12.484 CC lib/nvme/nvme.o 00:02:12.484 CC lib/nvme/nvme_quirks.o 00:02:12.484 CC lib/nvme/nvme_transport.o 00:02:12.484 CC lib/nvme/nvme_discovery.o 00:02:12.484 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:12.484 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:12.484 CC lib/nvme/nvme_tcp.o 00:02:12.484 CC lib/nvme/nvme_opal.o 00:02:12.484 CC lib/nvme/nvme_io_msg.o 00:02:12.484 CC lib/nvme/nvme_poll_group.o 00:02:12.484 CC lib/nvme/nvme_zns.o 00:02:12.484 CC lib/nvme/nvme_stubs.o 00:02:12.484 CC lib/nvme/nvme_auth.o 00:02:12.484 CC lib/nvme/nvme_cuse.o 00:02:12.484 CC lib/nvme/nvme_vfio_user.o 00:02:12.484 CC lib/nvme/nvme_rdma.o 00:02:12.744 LIB libspdk_thread.a 00:02:12.744 SO libspdk_thread.so.10.1 00:02:13.005 SYMLINK libspdk_thread.so 00:02:13.266 CC lib/blob/blobstore.o 00:02:13.266 CC lib/blob/request.o 00:02:13.266 CC lib/blob/zeroes.o 00:02:13.266 CC lib/blob/blob_bs_dev.o 00:02:13.266 CC 
lib/virtio/virtio.o 00:02:13.266 CC lib/virtio/virtio_vhost_user.o 00:02:13.266 CC lib/virtio/virtio_vfio_user.o 00:02:13.266 CC lib/init/json_config.o 00:02:13.266 CC lib/virtio/virtio_pci.o 00:02:13.266 CC lib/init/subsystem.o 00:02:13.266 CC lib/init/subsystem_rpc.o 00:02:13.266 CC lib/init/rpc.o 00:02:13.266 CC lib/accel/accel.o 00:02:13.266 CC lib/accel/accel_rpc.o 00:02:13.266 CC lib/accel/accel_sw.o 00:02:13.266 CC lib/vfu_tgt/tgt_endpoint.o 00:02:13.266 CC lib/vfu_tgt/tgt_rpc.o 00:02:13.527 LIB libspdk_init.a 00:02:13.527 SO libspdk_init.so.5.0 00:02:13.527 LIB libspdk_virtio.a 00:02:13.527 LIB libspdk_vfu_tgt.a 00:02:13.787 SO libspdk_virtio.so.7.0 00:02:13.787 SO libspdk_vfu_tgt.so.3.0 00:02:13.787 SYMLINK libspdk_init.so 00:02:13.787 SYMLINK libspdk_virtio.so 00:02:13.787 SYMLINK libspdk_vfu_tgt.so 00:02:14.048 CC lib/event/app.o 00:02:14.048 CC lib/event/reactor.o 00:02:14.048 CC lib/event/log_rpc.o 00:02:14.048 CC lib/event/scheduler_static.o 00:02:14.048 CC lib/event/app_rpc.o 00:02:14.308 LIB libspdk_accel.a 00:02:14.308 SO libspdk_accel.so.16.0 00:02:14.308 LIB libspdk_nvme.a 00:02:14.308 SYMLINK libspdk_accel.so 00:02:14.308 SO libspdk_nvme.so.13.1 00:02:14.308 LIB libspdk_event.a 00:02:14.569 SO libspdk_event.so.14.0 00:02:14.569 SYMLINK libspdk_event.so 00:02:14.569 CC lib/bdev/bdev.o 00:02:14.569 CC lib/bdev/bdev_rpc.o 00:02:14.569 CC lib/bdev/bdev_zone.o 00:02:14.569 CC lib/bdev/part.o 00:02:14.569 CC lib/bdev/scsi_nvme.o 00:02:14.831 SYMLINK libspdk_nvme.so 00:02:15.774 LIB libspdk_blob.a 00:02:15.774 SO libspdk_blob.so.11.0 00:02:16.034 SYMLINK libspdk_blob.so 00:02:16.295 CC lib/blobfs/tree.o 00:02:16.295 CC lib/blobfs/blobfs.o 00:02:16.295 CC lib/lvol/lvol.o 00:02:16.867 LIB libspdk_bdev.a 00:02:16.867 SO libspdk_bdev.so.16.0 00:02:17.127 LIB libspdk_blobfs.a 00:02:17.127 SO libspdk_blobfs.so.10.0 00:02:17.127 SYMLINK libspdk_bdev.so 00:02:17.127 SYMLINK libspdk_blobfs.so 00:02:17.127 LIB libspdk_lvol.a 00:02:17.127 SO libspdk_lvol.so.10.0 
00:02:17.389 SYMLINK libspdk_lvol.so 00:02:17.389 CC lib/scsi/dev.o 00:02:17.389 CC lib/scsi/lun.o 00:02:17.389 CC lib/scsi/port.o 00:02:17.389 CC lib/scsi/scsi.o 00:02:17.389 CC lib/scsi/scsi_bdev.o 00:02:17.389 CC lib/scsi/scsi_pr.o 00:02:17.389 CC lib/nbd/nbd.o 00:02:17.389 CC lib/scsi/scsi_rpc.o 00:02:17.389 CC lib/nbd/nbd_rpc.o 00:02:17.389 CC lib/scsi/task.o 00:02:17.389 CC lib/nvmf/ctrlr.o 00:02:17.389 CC lib/nvmf/ctrlr_discovery.o 00:02:17.389 CC lib/nvmf/ctrlr_bdev.o 00:02:17.389 CC lib/ublk/ublk.o 00:02:17.389 CC lib/ublk/ublk_rpc.o 00:02:17.389 CC lib/nvmf/subsystem.o 00:02:17.389 CC lib/nvmf/nvmf.o 00:02:17.389 CC lib/nvmf/nvmf_rpc.o 00:02:17.389 CC lib/ftl/ftl_core.o 00:02:17.389 CC lib/nvmf/transport.o 00:02:17.389 CC lib/ftl/ftl_init.o 00:02:17.389 CC lib/nvmf/tcp.o 00:02:17.389 CC lib/ftl/ftl_layout.o 00:02:17.389 CC lib/nvmf/stubs.o 00:02:17.389 CC lib/ftl/ftl_debug.o 00:02:17.389 CC lib/nvmf/mdns_server.o 00:02:17.389 CC lib/nvmf/vfio_user.o 00:02:17.389 CC lib/ftl/ftl_io.o 00:02:17.389 CC lib/nvmf/rdma.o 00:02:17.389 CC lib/ftl/ftl_sb.o 00:02:17.389 CC lib/nvmf/auth.o 00:02:17.389 CC lib/ftl/ftl_l2p.o 00:02:17.389 CC lib/ftl/ftl_nv_cache.o 00:02:17.389 CC lib/ftl/ftl_l2p_flat.o 00:02:17.389 CC lib/ftl/ftl_band.o 00:02:17.389 CC lib/ftl/ftl_band_ops.o 00:02:17.389 CC lib/ftl/ftl_rq.o 00:02:17.389 CC lib/ftl/ftl_writer.o 00:02:17.389 CC lib/ftl/ftl_reloc.o 00:02:17.389 CC lib/ftl/ftl_l2p_cache.o 00:02:17.389 CC lib/ftl/ftl_p2l.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:17.389 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:17.389 CC lib/ftl/utils/ftl_conf.o 00:02:17.389 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:17.389 CC lib/ftl/utils/ftl_md.o 00:02:17.389 CC lib/ftl/utils/ftl_mempool.o 00:02:17.389 CC lib/ftl/utils/ftl_bitmap.o 00:02:17.389 CC lib/ftl/utils/ftl_property.o 00:02:17.389 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:17.389 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:17.389 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:17.389 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:17.389 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:17.389 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:17.389 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:17.389 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:17.389 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:17.648 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:17.648 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:17.648 CC lib/ftl/base/ftl_base_dev.o 00:02:17.648 CC lib/ftl/base/ftl_base_bdev.o 00:02:17.648 CC lib/ftl/ftl_trace.o 00:02:17.909 LIB libspdk_scsi.a 00:02:17.909 LIB libspdk_nbd.a 00:02:17.909 SO libspdk_scsi.so.9.0 00:02:17.909 SO libspdk_nbd.so.7.0 00:02:17.909 SYMLINK libspdk_nbd.so 00:02:17.909 SYMLINK libspdk_scsi.so 00:02:18.170 LIB libspdk_ublk.a 00:02:18.170 SO libspdk_ublk.so.3.0 00:02:18.170 SYMLINK libspdk_ublk.so 00:02:18.430 CC lib/vhost/vhost.o 00:02:18.430 CC lib/vhost/vhost_rpc.o 00:02:18.430 CC lib/vhost/vhost_scsi.o 00:02:18.430 CC lib/vhost/vhost_blk.o 00:02:18.430 CC lib/vhost/rte_vhost_user.o 00:02:18.430 CC lib/iscsi/conn.o 00:02:18.430 CC lib/iscsi/init_grp.o 00:02:18.430 CC lib/iscsi/iscsi.o 00:02:18.430 CC lib/iscsi/md5.o 00:02:18.430 CC lib/iscsi/param.o 00:02:18.430 CC lib/iscsi/portal_grp.o 00:02:18.430 CC lib/iscsi/tgt_node.o 00:02:18.430 CC lib/iscsi/iscsi_subsystem.o 00:02:18.430 CC lib/iscsi/iscsi_rpc.o 00:02:18.430 CC lib/iscsi/task.o 00:02:18.430 LIB libspdk_ftl.a 00:02:18.690 SO libspdk_ftl.so.9.0 00:02:18.950 SYMLINK libspdk_ftl.so 00:02:19.210 LIB libspdk_nvmf.a 00:02:19.210 LIB libspdk_vhost.a 
00:02:19.210 SO libspdk_nvmf.so.19.0 00:02:19.534 SO libspdk_vhost.so.8.0 00:02:19.534 SYMLINK libspdk_vhost.so 00:02:19.534 SYMLINK libspdk_nvmf.so 00:02:19.534 LIB libspdk_iscsi.a 00:02:19.534 SO libspdk_iscsi.so.8.0 00:02:19.795 SYMLINK libspdk_iscsi.so 00:02:20.367 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.367 CC module/vfu_device/vfu_virtio.o 00:02:20.367 CC module/vfu_device/vfu_virtio_blk.o 00:02:20.367 CC module/vfu_device/vfu_virtio_scsi.o 00:02:20.367 CC module/vfu_device/vfu_virtio_rpc.o 00:02:20.367 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.628 CC module/keyring/linux/keyring.o 00:02:20.628 CC module/keyring/linux/keyring_rpc.o 00:02:20.628 CC module/accel/dsa/accel_dsa.o 00:02:20.628 LIB libspdk_env_dpdk_rpc.a 00:02:20.628 CC module/sock/posix/posix.o 00:02:20.628 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.628 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.628 CC module/accel/error/accel_error.o 00:02:20.628 CC module/accel/error/accel_error_rpc.o 00:02:20.628 CC module/keyring/file/keyring.o 00:02:20.628 CC module/accel/ioat/accel_ioat.o 00:02:20.628 CC module/keyring/file/keyring_rpc.o 00:02:20.628 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.628 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.628 CC module/blob/bdev/blob_bdev.o 00:02:20.628 CC module/accel/iaa/accel_iaa.o 00:02:20.628 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.628 SO libspdk_env_dpdk_rpc.so.6.0 00:02:20.628 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.628 LIB libspdk_keyring_linux.a 00:02:20.628 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.628 LIB libspdk_keyring_file.a 00:02:20.628 LIB libspdk_scheduler_gscheduler.a 00:02:20.628 LIB libspdk_accel_error.a 00:02:20.628 LIB libspdk_scheduler_dynamic.a 00:02:20.628 LIB libspdk_accel_ioat.a 00:02:20.628 SO libspdk_keyring_file.so.1.0 00:02:20.628 SO libspdk_keyring_linux.so.1.0 00:02:20.628 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.628 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.889 SO 
libspdk_accel_error.so.2.0 00:02:20.889 SO libspdk_scheduler_dynamic.so.4.0 00:02:20.889 SO libspdk_accel_ioat.so.6.0 00:02:20.889 LIB libspdk_accel_iaa.a 00:02:20.889 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.889 LIB libspdk_accel_dsa.a 00:02:20.890 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.890 SYMLINK libspdk_keyring_file.so 00:02:20.890 SYMLINK libspdk_keyring_linux.so 00:02:20.890 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.890 SO libspdk_accel_iaa.so.3.0 00:02:20.890 LIB libspdk_blob_bdev.a 00:02:20.890 SYMLINK libspdk_accel_error.so 00:02:20.890 SO libspdk_accel_dsa.so.5.0 00:02:20.890 SYMLINK libspdk_accel_ioat.so 00:02:20.890 SO libspdk_blob_bdev.so.11.0 00:02:20.890 LIB libspdk_vfu_device.a 00:02:20.890 SYMLINK libspdk_accel_iaa.so 00:02:20.890 SYMLINK libspdk_accel_dsa.so 00:02:20.890 SYMLINK libspdk_blob_bdev.so 00:02:20.890 SO libspdk_vfu_device.so.3.0 00:02:21.151 SYMLINK libspdk_vfu_device.so 00:02:21.151 LIB libspdk_sock_posix.a 00:02:21.151 SO libspdk_sock_posix.so.6.0 00:02:21.414 SYMLINK libspdk_sock_posix.so 00:02:21.414 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.414 CC module/bdev/delay/vbdev_delay.o 00:02:21.414 CC module/bdev/error/vbdev_error.o 00:02:21.414 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.414 CC module/bdev/gpt/gpt.o 00:02:21.414 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.414 CC module/bdev/malloc/bdev_malloc.o 00:02:21.414 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.414 CC module/bdev/split/vbdev_split.o 00:02:21.414 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.414 CC module/bdev/null/bdev_null.o 00:02:21.414 CC module/bdev/null/bdev_null_rpc.o 00:02:21.414 CC module/bdev/lvol/vbdev_lvol.o 00:02:21.414 CC module/bdev/nvme/bdev_nvme.o 00:02:21.414 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:21.414 CC module/bdev/nvme/nvme_rpc.o 00:02:21.414 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.414 CC module/blobfs/bdev/blobfs_bdev.o 00:02:21.414 CC module/bdev/nvme/bdev_mdns_client.o 00:02:21.414 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.414 CC module/bdev/nvme/vbdev_opal.o 00:02:21.414 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:21.414 CC module/bdev/aio/bdev_aio.o 00:02:21.414 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:21.414 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.414 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.414 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.414 CC module/bdev/raid/bdev_raid.o 00:02:21.414 CC module/bdev/ftl/bdev_ftl.o 00:02:21.414 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.414 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.414 CC module/bdev/raid/bdev_raid_rpc.o 00:02:21.414 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.414 CC module/bdev/raid/bdev_raid_sb.o 00:02:21.414 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:21.414 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.414 CC module/bdev/raid/raid0.o 00:02:21.414 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.414 CC module/bdev/raid/raid1.o 00:02:21.414 CC module/bdev/raid/concat.o 00:02:21.414 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.414 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.675 LIB libspdk_blobfs_bdev.a 00:02:21.675 LIB libspdk_bdev_split.a 00:02:21.675 LIB libspdk_bdev_error.a 00:02:21.675 LIB libspdk_bdev_gpt.a 00:02:21.675 SO libspdk_blobfs_bdev.so.6.0 00:02:21.675 SO libspdk_bdev_split.so.6.0 00:02:21.675 LIB libspdk_bdev_null.a 00:02:21.675 SO libspdk_bdev_error.so.6.0 00:02:21.936 SO libspdk_bdev_gpt.so.6.0 00:02:21.936 SO libspdk_bdev_null.so.6.0 00:02:21.936 SYMLINK libspdk_blobfs_bdev.so 00:02:21.936 SYMLINK libspdk_bdev_split.so 00:02:21.936 LIB libspdk_bdev_aio.a 00:02:21.936 LIB libspdk_bdev_ftl.a 00:02:21.936 SYMLINK libspdk_bdev_error.so 00:02:21.936 LIB libspdk_bdev_passthru.a 00:02:21.936 LIB libspdk_bdev_delay.a 00:02:21.936 LIB libspdk_bdev_zone_block.a 00:02:21.936 SO libspdk_bdev_ftl.so.6.0 00:02:21.936 LIB libspdk_bdev_malloc.a 00:02:21.936 SO libspdk_bdev_aio.so.6.0 00:02:21.936 SYMLINK 
libspdk_bdev_gpt.so 00:02:21.936 SYMLINK libspdk_bdev_null.so 00:02:21.936 SO libspdk_bdev_passthru.so.6.0 00:02:21.936 SO libspdk_bdev_delay.so.6.0 00:02:21.936 LIB libspdk_bdev_iscsi.a 00:02:21.936 SO libspdk_bdev_zone_block.so.6.0 00:02:21.936 SO libspdk_bdev_malloc.so.6.0 00:02:21.936 SYMLINK libspdk_bdev_aio.so 00:02:21.936 SYMLINK libspdk_bdev_ftl.so 00:02:21.936 SO libspdk_bdev_iscsi.so.6.0 00:02:21.936 SYMLINK libspdk_bdev_passthru.so 00:02:21.936 SYMLINK libspdk_bdev_zone_block.so 00:02:21.936 SYMLINK libspdk_bdev_delay.so 00:02:21.936 LIB libspdk_bdev_virtio.a 00:02:21.936 SYMLINK libspdk_bdev_malloc.so 00:02:21.936 LIB libspdk_bdev_lvol.a 00:02:21.936 SYMLINK libspdk_bdev_iscsi.so 00:02:21.936 SO libspdk_bdev_virtio.so.6.0 00:02:21.936 SO libspdk_bdev_lvol.so.6.0 00:02:22.198 SYMLINK libspdk_bdev_lvol.so 00:02:22.198 SYMLINK libspdk_bdev_virtio.so 00:02:22.459 LIB libspdk_bdev_raid.a 00:02:22.459 SO libspdk_bdev_raid.so.6.0 00:02:22.459 SYMLINK libspdk_bdev_raid.so 00:02:23.403 LIB libspdk_bdev_nvme.a 00:02:23.403 SO libspdk_bdev_nvme.so.7.0 00:02:23.664 SYMLINK libspdk_bdev_nvme.so 00:02:24.236 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.236 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.236 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.236 CC module/event/subsystems/vmd/vmd.o 00:02:24.236 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.236 CC module/event/subsystems/sock/sock.o 00:02:24.236 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.236 CC module/event/subsystems/keyring/keyring.o 00:02:24.236 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:24.497 LIB libspdk_event_scheduler.a 00:02:24.497 LIB libspdk_event_vhost_blk.a 00:02:24.497 LIB libspdk_event_vmd.a 00:02:24.497 LIB libspdk_event_keyring.a 00:02:24.497 SO libspdk_event_scheduler.so.4.0 00:02:24.497 LIB libspdk_event_sock.a 00:02:24.497 LIB libspdk_event_iobuf.a 00:02:24.497 LIB libspdk_event_vfu_tgt.a 00:02:24.497 SO libspdk_event_vhost_blk.so.3.0 
00:02:24.497 SO libspdk_event_keyring.so.1.0 00:02:24.497 SO libspdk_event_vmd.so.6.0 00:02:24.497 SO libspdk_event_sock.so.5.0 00:02:24.497 SO libspdk_event_vfu_tgt.so.3.0 00:02:24.497 SO libspdk_event_iobuf.so.3.0 00:02:24.497 SYMLINK libspdk_event_scheduler.so 00:02:24.497 SYMLINK libspdk_event_vhost_blk.so 00:02:24.497 SYMLINK libspdk_event_keyring.so 00:02:24.497 SYMLINK libspdk_event_vmd.so 00:02:24.497 SYMLINK libspdk_event_sock.so 00:02:24.497 SYMLINK libspdk_event_vfu_tgt.so 00:02:24.759 SYMLINK libspdk_event_iobuf.so 00:02:25.020 CC module/event/subsystems/accel/accel.o 00:02:25.020 LIB libspdk_event_accel.a 00:02:25.281 SO libspdk_event_accel.so.6.0 00:02:25.281 SYMLINK libspdk_event_accel.so 00:02:25.542 CC module/event/subsystems/bdev/bdev.o 00:02:25.802 LIB libspdk_event_bdev.a 00:02:25.802 SO libspdk_event_bdev.so.6.0 00:02:25.802 SYMLINK libspdk_event_bdev.so 00:02:26.422 CC module/event/subsystems/nbd/nbd.o 00:02:26.422 CC module/event/subsystems/scsi/scsi.o 00:02:26.422 CC module/event/subsystems/ublk/ublk.o 00:02:26.422 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:26.422 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:26.422 LIB libspdk_event_nbd.a 00:02:26.422 LIB libspdk_event_ublk.a 00:02:26.422 LIB libspdk_event_scsi.a 00:02:26.422 SO libspdk_event_nbd.so.6.0 00:02:26.422 SO libspdk_event_ublk.so.3.0 00:02:26.422 SO libspdk_event_scsi.so.6.0 00:02:26.422 LIB libspdk_event_nvmf.a 00:02:26.422 SYMLINK libspdk_event_nbd.so 00:02:26.422 SYMLINK libspdk_event_ublk.so 00:02:26.422 SYMLINK libspdk_event_scsi.so 00:02:26.422 SO libspdk_event_nvmf.so.6.0 00:02:26.683 SYMLINK libspdk_event_nvmf.so 00:02:26.944 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.944 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.944 LIB libspdk_event_vhost_scsi.a 00:02:26.944 LIB libspdk_event_iscsi.a 00:02:26.944 SO libspdk_event_vhost_scsi.so.3.0 00:02:27.206 SO libspdk_event_iscsi.so.6.0 00:02:27.206 SYMLINK libspdk_event_vhost_scsi.so 00:02:27.206 
SYMLINK libspdk_event_iscsi.so 00:02:27.469 SO libspdk.so.6.0 00:02:27.469 SYMLINK libspdk.so 00:02:27.728 TEST_HEADER include/spdk/accel.h 00:02:27.728 TEST_HEADER include/spdk/accel_module.h 00:02:27.728 TEST_HEADER include/spdk/assert.h 00:02:27.728 TEST_HEADER include/spdk/barrier.h 00:02:27.728 TEST_HEADER include/spdk/base64.h 00:02:27.728 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.728 TEST_HEADER include/spdk/bdev.h 00:02:27.728 TEST_HEADER include/spdk/bdev_module.h 00:02:27.728 TEST_HEADER include/spdk/bit_pool.h 00:02:27.728 TEST_HEADER include/spdk/bit_array.h 00:02:27.728 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.728 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.728 TEST_HEADER include/spdk/conf.h 00:02:27.728 TEST_HEADER include/spdk/blobfs.h 00:02:27.728 TEST_HEADER include/spdk/blob.h 00:02:27.728 TEST_HEADER include/spdk/config.h 00:02:27.728 TEST_HEADER include/spdk/crc16.h 00:02:27.728 TEST_HEADER include/spdk/cpuset.h 00:02:27.728 TEST_HEADER include/spdk/crc32.h 00:02:27.728 TEST_HEADER include/spdk/crc64.h 00:02:27.728 TEST_HEADER include/spdk/dif.h 00:02:27.728 TEST_HEADER include/spdk/dma.h 00:02:27.729 TEST_HEADER include/spdk/endian.h 00:02:27.729 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.729 TEST_HEADER include/spdk/event.h 00:02:27.729 TEST_HEADER include/spdk/env.h 00:02:27.729 TEST_HEADER include/spdk/fd_group.h 00:02:27.729 CC test/rpc_client/rpc_client_test.o 00:02:27.729 TEST_HEADER include/spdk/fd.h 00:02:27.729 TEST_HEADER include/spdk/file.h 00:02:27.729 CXX app/trace/trace.o 00:02:27.729 CC app/trace_record/trace_record.o 00:02:27.729 TEST_HEADER include/spdk/ftl.h 00:02:27.729 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.729 TEST_HEADER include/spdk/hexlify.h 00:02:27.729 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.729 TEST_HEADER include/spdk/histogram_data.h 00:02:27.729 TEST_HEADER include/spdk/idxd.h 00:02:27.729 TEST_HEADER include/spdk/idxd_spec.h 00:02:27.729 CC app/spdk_top/spdk_top.o 00:02:27.729 
TEST_HEADER include/spdk/init.h 00:02:27.729 CC app/spdk_nvme_perf/perf.o 00:02:27.729 TEST_HEADER include/spdk/ioat.h 00:02:27.729 TEST_HEADER include/spdk/ioat_spec.h 00:02:27.729 CC app/spdk_lspci/spdk_lspci.o 00:02:27.729 TEST_HEADER include/spdk/iscsi_spec.h 00:02:27.729 TEST_HEADER include/spdk/json.h 00:02:27.729 TEST_HEADER include/spdk/jsonrpc.h 00:02:27.729 TEST_HEADER include/spdk/keyring.h 00:02:27.729 TEST_HEADER include/spdk/keyring_module.h 00:02:27.729 CC app/spdk_nvme_identify/identify.o 00:02:27.729 TEST_HEADER include/spdk/likely.h 00:02:27.729 TEST_HEADER include/spdk/log.h 00:02:27.729 TEST_HEADER include/spdk/lvol.h 00:02:27.729 TEST_HEADER include/spdk/memory.h 00:02:27.729 TEST_HEADER include/spdk/mmio.h 00:02:27.729 TEST_HEADER include/spdk/nbd.h 00:02:27.729 TEST_HEADER include/spdk/net.h 00:02:27.729 TEST_HEADER include/spdk/notify.h 00:02:27.729 TEST_HEADER include/spdk/nvme.h 00:02:27.729 TEST_HEADER include/spdk/nvme_intel.h 00:02:27.729 TEST_HEADER include/spdk/nvme_spec.h 00:02:27.729 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:27.729 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:27.729 TEST_HEADER include/spdk/nvme_zns.h 00:02:27.729 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:27.729 CC app/spdk_dd/spdk_dd.o 00:02:27.729 TEST_HEADER include/spdk/nvmf.h 00:02:27.729 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:27.729 TEST_HEADER include/spdk/nvmf_spec.h 00:02:27.729 TEST_HEADER include/spdk/opal.h 00:02:27.729 TEST_HEADER include/spdk/nvmf_transport.h 00:02:27.729 TEST_HEADER include/spdk/pci_ids.h 00:02:27.729 TEST_HEADER include/spdk/opal_spec.h 00:02:27.729 TEST_HEADER include/spdk/queue.h 00:02:27.729 TEST_HEADER include/spdk/pipe.h 00:02:27.729 TEST_HEADER include/spdk/reduce.h 00:02:27.729 TEST_HEADER include/spdk/rpc.h 00:02:27.729 TEST_HEADER include/spdk/scsi.h 00:02:27.729 TEST_HEADER include/spdk/scheduler.h 00:02:27.729 TEST_HEADER include/spdk/scsi_spec.h 00:02:27.729 TEST_HEADER include/spdk/stdinc.h 00:02:27.729 
TEST_HEADER include/spdk/sock.h 00:02:27.729 TEST_HEADER include/spdk/string.h 00:02:27.729 TEST_HEADER include/spdk/trace.h 00:02:27.729 TEST_HEADER include/spdk/thread.h 00:02:27.729 TEST_HEADER include/spdk/trace_parser.h 00:02:27.729 TEST_HEADER include/spdk/ublk.h 00:02:27.729 TEST_HEADER include/spdk/tree.h 00:02:27.729 TEST_HEADER include/spdk/util.h 00:02:27.729 TEST_HEADER include/spdk/version.h 00:02:27.729 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.729 TEST_HEADER include/spdk/uuid.h 00:02:27.729 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.729 TEST_HEADER include/spdk/vhost.h 00:02:27.729 TEST_HEADER include/spdk/xor.h 00:02:27.729 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.729 TEST_HEADER include/spdk/vmd.h 00:02:27.729 TEST_HEADER include/spdk/zipf.h 00:02:27.729 CXX test/cpp_headers/accel.o 00:02:27.729 CXX test/cpp_headers/accel_module.o 00:02:27.729 CXX test/cpp_headers/assert.o 00:02:27.729 CC app/iscsi_tgt/iscsi_tgt.o 00:02:27.729 CXX test/cpp_headers/barrier.o 00:02:27.729 CXX test/cpp_headers/bdev.o 00:02:27.729 CXX test/cpp_headers/base64.o 00:02:27.729 CXX test/cpp_headers/bdev_module.o 00:02:27.729 CXX test/cpp_headers/bit_pool.o 00:02:27.729 CXX test/cpp_headers/bdev_zone.o 00:02:27.729 CXX test/cpp_headers/bit_array.o 00:02:27.729 CC app/nvmf_tgt/nvmf_main.o 00:02:27.729 CXX test/cpp_headers/blobfs.o 00:02:27.729 CXX test/cpp_headers/blob_bdev.o 00:02:27.729 CXX test/cpp_headers/blobfs_bdev.o 00:02:27.729 CXX test/cpp_headers/blob.o 00:02:27.729 CXX test/cpp_headers/conf.o 00:02:27.729 CXX test/cpp_headers/config.o 00:02:27.729 CXX test/cpp_headers/crc16.o 00:02:27.729 CXX test/cpp_headers/cpuset.o 00:02:27.729 CXX test/cpp_headers/crc32.o 00:02:27.729 CXX test/cpp_headers/dma.o 00:02:27.729 CXX test/cpp_headers/crc64.o 00:02:27.729 CXX test/cpp_headers/dif.o 00:02:27.729 CXX test/cpp_headers/env_dpdk.o 00:02:27.729 CXX test/cpp_headers/endian.o 00:02:27.729 CC app/spdk_tgt/spdk_tgt.o 00:02:27.729 CXX 
test/cpp_headers/env.o 00:02:27.989 CXX test/cpp_headers/fd_group.o 00:02:27.989 CXX test/cpp_headers/event.o 00:02:27.989 CXX test/cpp_headers/file.o 00:02:27.989 CXX test/cpp_headers/fd.o 00:02:27.989 CXX test/cpp_headers/ftl.o 00:02:27.989 CXX test/cpp_headers/gpt_spec.o 00:02:27.989 CXX test/cpp_headers/histogram_data.o 00:02:27.990 CXX test/cpp_headers/hexlify.o 00:02:27.990 CXX test/cpp_headers/idxd_spec.o 00:02:27.990 CXX test/cpp_headers/init.o 00:02:27.990 CXX test/cpp_headers/idxd.o 00:02:27.990 CXX test/cpp_headers/ioat.o 00:02:27.990 CXX test/cpp_headers/ioat_spec.o 00:02:27.990 CXX test/cpp_headers/iscsi_spec.o 00:02:27.990 CXX test/cpp_headers/json.o 00:02:27.990 CXX test/cpp_headers/jsonrpc.o 00:02:27.990 CXX test/cpp_headers/keyring.o 00:02:27.990 CXX test/cpp_headers/likely.o 00:02:27.990 CXX test/cpp_headers/keyring_module.o 00:02:27.990 CXX test/cpp_headers/log.o 00:02:27.990 CXX test/cpp_headers/memory.o 00:02:27.990 CXX test/cpp_headers/lvol.o 00:02:27.990 CXX test/cpp_headers/nbd.o 00:02:27.990 CXX test/cpp_headers/mmio.o 00:02:27.990 CXX test/cpp_headers/notify.o 00:02:27.990 CXX test/cpp_headers/net.o 00:02:27.990 CXX test/cpp_headers/nvme.o 00:02:27.990 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.990 CXX test/cpp_headers/nvme_intel.o 00:02:27.990 CXX test/cpp_headers/nvme_zns.o 00:02:27.990 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.990 CXX test/cpp_headers/nvme_spec.o 00:02:27.990 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.990 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.990 CXX test/cpp_headers/nvmf_spec.o 00:02:27.990 CXX test/cpp_headers/nvmf.o 00:02:27.990 CXX test/cpp_headers/nvmf_transport.o 00:02:27.990 CXX test/cpp_headers/opal.o 00:02:27.990 CXX test/cpp_headers/queue.o 00:02:27.990 CXX test/cpp_headers/pci_ids.o 00:02:27.990 CXX test/cpp_headers/pipe.o 00:02:27.990 CXX test/cpp_headers/opal_spec.o 00:02:27.990 CXX test/cpp_headers/reduce.o 00:02:27.990 CXX test/cpp_headers/rpc.o 00:02:27.990 CXX test/cpp_headers/scsi.o 
00:02:27.990 CXX test/cpp_headers/scheduler.o 00:02:27.990 CXX test/cpp_headers/stdinc.o 00:02:27.990 CXX test/cpp_headers/sock.o 00:02:27.990 CXX test/cpp_headers/scsi_spec.o 00:02:27.990 CXX test/cpp_headers/trace.o 00:02:27.990 CXX test/cpp_headers/string.o 00:02:27.990 CXX test/cpp_headers/thread.o 00:02:27.990 CXX test/cpp_headers/trace_parser.o 00:02:27.990 CXX test/cpp_headers/uuid.o 00:02:27.990 CXX test/cpp_headers/tree.o 00:02:27.990 CXX test/cpp_headers/ublk.o 00:02:27.990 CXX test/cpp_headers/util.o 00:02:27.990 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.990 CXX test/cpp_headers/version.o 00:02:27.990 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.990 CXX test/cpp_headers/vhost.o 00:02:27.990 CXX test/cpp_headers/vmd.o 00:02:27.990 CXX test/cpp_headers/xor.o 00:02:27.990 CXX test/cpp_headers/zipf.o 00:02:27.990 CC test/thread/poller_perf/poller_perf.o 00:02:27.990 CC test/app/jsoncat/jsoncat.o 00:02:27.990 CC test/app/histogram_perf/histogram_perf.o 00:02:27.990 CC examples/util/zipf/zipf.o 00:02:27.990 CC test/env/memory/memory_ut.o 00:02:27.990 CC test/app/stub/stub.o 00:02:27.990 CC test/env/pci/pci_ut.o 00:02:27.990 CC examples/ioat/verify/verify.o 00:02:27.990 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.990 CC examples/ioat/perf/perf.o 00:02:27.990 CC test/env/vtophys/vtophys.o 00:02:28.249 LINK spdk_lspci 00:02:28.249 CC app/fio/nvme/fio_plugin.o 00:02:28.250 CC test/dma/test_dma/test_dma.o 00:02:28.250 CC test/app/bdev_svc/bdev_svc.o 00:02:28.250 CC app/fio/bdev/fio_plugin.o 00:02:28.250 LINK rpc_client_test 00:02:28.250 LINK spdk_trace_record 00:02:28.250 LINK spdk_nvme_discover 00:02:28.509 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.509 LINK interrupt_tgt 00:02:28.509 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.509 LINK nvmf_tgt 00:02:28.509 LINK poller_perf 00:02:28.509 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.509 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.509 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:28.509 LINK iscsi_tgt 00:02:28.509 LINK spdk_tgt 00:02:28.509 LINK stub 00:02:28.509 LINK histogram_perf 00:02:28.509 LINK jsoncat 00:02:28.509 LINK env_dpdk_post_init 00:02:28.509 LINK zipf 00:02:28.509 LINK vtophys 00:02:28.770 LINK spdk_dd 00:02:28.770 LINK ioat_perf 00:02:28.770 LINK bdev_svc 00:02:28.770 LINK verify 00:02:28.770 LINK test_dma 00:02:28.770 LINK spdk_trace 00:02:28.770 LINK pci_ut 00:02:28.770 LINK vhost_fuzz 00:02:29.033 LINK nvme_fuzz 00:02:29.033 CC test/event/reactor/reactor.o 00:02:29.033 CC test/event/event_perf/event_perf.o 00:02:29.033 CC test/event/reactor_perf/reactor_perf.o 00:02:29.033 CC test/event/app_repeat/app_repeat.o 00:02:29.033 CC test/event/scheduler/scheduler.o 00:02:29.033 LINK spdk_nvme 00:02:29.033 LINK spdk_nvme_identify 00:02:29.033 LINK mem_callbacks 00:02:29.033 LINK spdk_bdev 00:02:29.033 LINK spdk_nvme_perf 00:02:29.033 CC examples/vmd/lsvmd/lsvmd.o 00:02:29.033 CC examples/sock/hello_world/hello_sock.o 00:02:29.033 CC examples/idxd/perf/perf.o 00:02:29.033 LINK event_perf 00:02:29.033 CC examples/vmd/led/led.o 00:02:29.033 LINK reactor 00:02:29.033 LINK reactor_perf 00:02:29.294 CC examples/thread/thread/thread_ex.o 00:02:29.294 LINK spdk_top 00:02:29.294 LINK app_repeat 00:02:29.294 CC app/vhost/vhost.o 00:02:29.294 LINK scheduler 00:02:29.294 LINK lsvmd 00:02:29.294 CC test/nvme/sgl/sgl.o 00:02:29.294 CC test/nvme/startup/startup.o 00:02:29.294 CC test/nvme/overhead/overhead.o 00:02:29.294 CC test/nvme/e2edp/nvme_dp.o 00:02:29.294 CC test/nvme/err_injection/err_injection.o 00:02:29.294 LINK led 00:02:29.294 CC test/nvme/cuse/cuse.o 00:02:29.294 CC test/nvme/reset/reset.o 00:02:29.294 CC test/nvme/simple_copy/simple_copy.o 00:02:29.294 CC test/nvme/connect_stress/connect_stress.o 00:02:29.294 CC test/nvme/aer/aer.o 00:02:29.294 CC test/nvme/compliance/nvme_compliance.o 00:02:29.294 CC test/nvme/fused_ordering/fused_ordering.o 00:02:29.294 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:29.294 CC test/nvme/fdp/fdp.o 00:02:29.294 CC test/nvme/reserve/reserve.o 00:02:29.294 CC test/nvme/boot_partition/boot_partition.o 00:02:29.294 CC test/accel/dif/dif.o 00:02:29.294 LINK hello_sock 00:02:29.294 CC test/blobfs/mkfs/mkfs.o 00:02:29.555 LINK idxd_perf 00:02:29.555 LINK thread 00:02:29.555 LINK vhost 00:02:29.555 CC test/lvol/esnap/esnap.o 00:02:29.555 LINK err_injection 00:02:29.555 LINK startup 00:02:29.555 LINK connect_stress 00:02:29.555 LINK memory_ut 00:02:29.555 LINK doorbell_aers 00:02:29.555 LINK boot_partition 00:02:29.555 LINK simple_copy 00:02:29.555 LINK reserve 00:02:29.555 LINK fused_ordering 00:02:29.555 LINK nvme_compliance 00:02:29.555 LINK reset 00:02:29.555 LINK mkfs 00:02:29.555 LINK overhead 00:02:29.555 LINK sgl 00:02:29.555 LINK nvme_dp 00:02:29.555 LINK aer 00:02:29.555 LINK fdp 00:02:29.814 LINK dif 00:02:29.814 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.814 CC examples/nvme/hotplug/hotplug.o 00:02:29.814 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.814 CC examples/nvme/arbitration/arbitration.o 00:02:29.814 CC examples/nvme/abort/abort.o 00:02:29.814 CC examples/nvme/reconnect/reconnect.o 00:02:29.814 CC examples/nvme/hello_world/hello_world.o 00:02:29.814 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.814 LINK iscsi_fuzz 00:02:30.074 CC examples/blob/cli/blobcli.o 00:02:30.074 CC examples/blob/hello_world/hello_blob.o 00:02:30.074 CC examples/accel/perf/accel_perf.o 00:02:30.074 LINK pmr_persistence 00:02:30.074 LINK cmb_copy 00:02:30.074 LINK hotplug 00:02:30.074 LINK hello_world 00:02:30.074 LINK reconnect 00:02:30.074 LINK arbitration 00:02:30.335 LINK abort 00:02:30.335 LINK hello_blob 00:02:30.335 LINK nvme_manage 00:02:30.335 CC test/bdev/bdevio/bdevio.o 00:02:30.335 LINK accel_perf 00:02:30.596 LINK blobcli 00:02:30.596 LINK cuse 00:02:30.857 LINK bdevio 00:02:31.117 CC examples/bdev/bdevperf/bdevperf.o 00:02:31.117 CC 
examples/bdev/hello_world/hello_bdev.o 00:02:31.378 LINK hello_bdev 00:02:31.639 LINK bdevperf 00:02:32.213 CC examples/nvmf/nvmf/nvmf.o 00:02:32.784 LINK nvmf 00:02:33.727 LINK esnap 00:02:33.989 00:02:33.989 real 0m51.385s 00:02:33.989 user 6m32.305s 00:02:33.989 sys 4m13.288s 00:02:33.989 19:42:21 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:33.989 19:42:21 make -- common/autotest_common.sh@10 -- $ set +x 00:02:33.989 ************************************ 00:02:33.989 END TEST make 00:02:33.989 ************************************ 00:02:34.251 19:42:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:34.251 19:42:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:34.251 19:42:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:34.251 19:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:34.251 19:42:21 -- pm/common@44 -- $ pid=3336242 00:02:34.251 19:42:21 -- pm/common@50 -- $ kill -TERM 3336242 00:02:34.251 19:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:34.251 19:42:21 -- pm/common@44 -- $ pid=3336243 00:02:34.251 19:42:21 -- pm/common@50 -- $ kill -TERM 3336243 00:02:34.251 19:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:34.251 19:42:21 -- pm/common@44 -- $ pid=3336245 00:02:34.251 19:42:21 -- pm/common@50 -- $ kill -TERM 3336245 00:02:34.251 19:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:21 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:34.251 19:42:21 -- pm/common@44 -- $ pid=3336268 00:02:34.251 19:42:21 -- pm/common@50 -- $ sudo -E kill -TERM 3336268 00:02:34.251 19:42:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:34.251 19:42:22 -- nvmf/common.sh@7 -- # uname -s 00:02:34.251 19:42:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:34.251 19:42:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:34.251 19:42:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:34.251 19:42:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:34.251 19:42:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:34.251 19:42:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:34.251 19:42:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:34.251 19:42:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:34.251 19:42:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:34.251 19:42:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:34.251 19:42:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:34.251 19:42:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:34.251 19:42:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:34.251 19:42:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:34.251 19:42:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:34.251 19:42:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:34.251 19:42:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:34.251 19:42:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:34.251 19:42:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:34.251 19:42:22 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:34.251 19:42:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.251 19:42:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.251 19:42:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.251 19:42:22 -- paths/export.sh@5 -- # export PATH 00:02:34.251 19:42:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.251 19:42:22 -- nvmf/common.sh@47 -- # : 0 00:02:34.251 19:42:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:34.251 19:42:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:34.251 19:42:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:34.251 19:42:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:34.251 19:42:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:34.251 19:42:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:34.251 19:42:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:34.251 19:42:22 -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:02:34.251 19:42:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:34.251 19:42:22 -- spdk/autotest.sh@32 -- # uname -s 00:02:34.251 19:42:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:34.251 19:42:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:34.251 19:42:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.251 19:42:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:34.251 19:42:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.251 19:42:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:34.251 19:42:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:34.251 19:42:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:34.251 19:42:22 -- spdk/autotest.sh@48 -- # udevadm_pid=3399912 00:02:34.251 19:42:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:34.251 19:42:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:34.251 19:42:22 -- pm/common@17 -- # local monitor 00:02:34.251 19:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.251 19:42:22 -- pm/common@21 -- # date +%s 00:02:34.251 19:42:22 -- pm/common@21 -- # date +%s 00:02:34.251 19:42:22 -- pm/common@25 -- # sleep 1 00:02:34.251 19:42:22 -- pm/common@21 -- # date +%s 00:02:34.251 19:42:22 -- pm/common@21 -- # date +%s 00:02:34.251 19:42:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842942 00:02:34.252 19:42:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842942 00:02:34.252 19:42:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842942 00:02:34.252 19:42:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842942 00:02:34.513 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842942_collect-vmstat.pm.log 00:02:34.513 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842942_collect-cpu-load.pm.log 00:02:34.513 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842942_collect-cpu-temp.pm.log 00:02:34.513 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842942_collect-bmc-pm.bmc.pm.log 00:02:35.456 19:42:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:35.456 19:42:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:35.456 19:42:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:35.456 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:02:35.456 19:42:23 -- spdk/autotest.sh@59 -- # create_test_list 00:02:35.456 19:42:23 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:35.456 19:42:23 -- common/autotest_common.sh@10 -- # set +x 00:02:35.456 19:42:23 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:35.456 19:42:23 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.456 19:42:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.456 19:42:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:35.456 19:42:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.456 19:42:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:35.456 19:42:23 -- common/autotest_common.sh@1455 -- # uname 00:02:35.456 19:42:23 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:35.456 19:42:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:35.456 19:42:23 -- common/autotest_common.sh@1475 -- # uname 00:02:35.456 19:42:23 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:35.456 19:42:23 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:35.456 19:42:23 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:35.456 19:42:23 -- spdk/autotest.sh@72 -- # hash lcov 00:02:35.456 19:42:23 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:35.456 19:42:23 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:35.456 --rc lcov_branch_coverage=1 00:02:35.456 --rc lcov_function_coverage=1 00:02:35.456 --rc genhtml_branch_coverage=1 00:02:35.456 --rc genhtml_function_coverage=1 00:02:35.456 --rc genhtml_legend=1 00:02:35.456 --rc geninfo_all_blocks=1 00:02:35.456 ' 00:02:35.456 19:42:23 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:35.456 --rc lcov_branch_coverage=1 00:02:35.456 --rc lcov_function_coverage=1 00:02:35.456 --rc genhtml_branch_coverage=1 00:02:35.456 --rc genhtml_function_coverage=1 00:02:35.456 --rc genhtml_legend=1 00:02:35.456 --rc geninfo_all_blocks=1 00:02:35.456 ' 00:02:35.456 19:42:23 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:35.456 --rc 
lcov_branch_coverage=1 00:02:35.456 --rc lcov_function_coverage=1 00:02:35.456 --rc genhtml_branch_coverage=1 00:02:35.456 --rc genhtml_function_coverage=1 00:02:35.456 --rc genhtml_legend=1 00:02:35.456 --rc geninfo_all_blocks=1 00:02:35.456 --no-external' 00:02:35.456 19:42:23 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:35.456 --rc lcov_branch_coverage=1 00:02:35.456 --rc lcov_function_coverage=1 00:02:35.456 --rc genhtml_branch_coverage=1 00:02:35.456 --rc genhtml_function_coverage=1 00:02:35.456 --rc genhtml_legend=1 00:02:35.456 --rc geninfo_all_blocks=1 00:02:35.456 --no-external' 00:02:35.456 19:42:23 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:35.456 lcov: LCOV version 1.14 00:02:35.456 19:42:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:50.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:50.370 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:00.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 
00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:00.418 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:00.418 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:00.418 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:00.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:00.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 
00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:00.419 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 
00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:00.419 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:00.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:04.629 19:42:52 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:04.629 19:42:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.629 19:42:52 -- common/autotest_common.sh@10 -- # set +x 00:03:04.629 19:42:52 -- spdk/autotest.sh@91 -- # rm -f 00:03:04.629 19:42:52 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.934 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:07.934 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.934 0000:00:01.4 (8086 0b00): Already using the 
ioatdma driver 00:03:08.195 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:08.195 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:08.195 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:08.195 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:08.195 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:08.456 19:42:56 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:08.456 19:42:56 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:08.456 19:42:56 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:08.456 19:42:56 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:08.456 19:42:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:08.456 19:42:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:08.456 19:42:56 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:08.456 19:42:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.456 19:42:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:08.456 19:42:56 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:08.456 19:42:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.456 19:42:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:08.456 19:42:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:08.456 19:42:56 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:08.456 19:42:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.456 No valid GPT data, bailing 00:03:08.456 19:42:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.456 19:42:56 -- scripts/common.sh@391 -- # pt= 00:03:08.456 19:42:56 -- scripts/common.sh@392 -- # return 1 00:03:08.456 19:42:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.456 1+0 records in 00:03:08.456 1+0 
records out 00:03:08.456 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401184 s, 261 MB/s 00:03:08.456 19:42:56 -- spdk/autotest.sh@118 -- # sync 00:03:08.456 19:42:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.456 19:42:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.456 19:42:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.600 19:43:04 -- spdk/autotest.sh@124 -- # uname -s 00:03:16.600 19:43:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:16.600 19:43:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.600 19:43:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.600 19:43:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.600 19:43:04 -- common/autotest_common.sh@10 -- # set +x 00:03:16.600 ************************************ 00:03:16.600 START TEST setup.sh 00:03:16.600 ************************************ 00:03:16.600 19:43:04 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.600 * Looking for test storage... 
00:03:16.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.600 19:43:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:16.600 19:43:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.600 19:43:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.600 19:43:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.600 19:43:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.600 19:43:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.600 ************************************ 00:03:16.600 START TEST acl 00:03:16.600 ************************************ 00:03:16.600 19:43:04 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.861 * Looking for test storage... 00:03:16.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.861 19:43:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.861 19:43:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:16.861 19:43:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:16.862 19:43:04 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:16.862 19:43:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:16.862 19:43:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.862 19:43:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:16.862 19:43:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.862 19:43:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.070 19:43:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:21.070 19:43:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:21.070 19:43:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.070 19:43:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:21.070 19:43:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.070 19:43:08 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:24.375 Hugepages 00:03:24.375 node hugesize free / total 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 00:03:24.375 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.375 
19:43:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.375 19:43:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 
00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.375 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.376 19:43:12 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:24.376 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.376 19:43:12 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.376 19:43:12 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.376 19:43:12 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:24.376 19:43:12 setup.sh.acl -- setup/acl.sh@54 -- # 
run_test denied denied 00:03:24.376 19:43:12 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:24.376 19:43:12 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:24.376 19:43:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.376 ************************************ 00:03:24.376 START TEST denied 00:03:24.376 ************************************ 00:03:24.376 19:43:12 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:24.376 19:43:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:24.376 19:43:12 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:24.376 19:43:12 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:24.376 19:43:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.376 19:43:12 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.689 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:28.689 19:43:16 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:28.690 19:43:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.690 19:43:16 setup.sh.acl.denied -- 
setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.900 00:03:32.900 real 0m8.592s 00:03:32.900 user 0m2.856s 00:03:32.900 sys 0m5.027s 00:03:32.900 19:43:20 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:32.900 19:43:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:32.900 ************************************ 00:03:32.900 END TEST denied 00:03:32.900 ************************************ 00:03:33.161 19:43:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:33.161 19:43:20 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.161 19:43:20 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.161 19:43:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.161 ************************************ 00:03:33.161 START TEST allowed 00:03:33.161 ************************************ 00:03:33.161 19:43:20 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:33.161 19:43:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:33.161 19:43:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:33.161 19:43:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:33.161 19:43:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.161 19:43:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:39.751 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:39.751 19:43:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:39.751 19:43:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:39.751 19:43:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:39.751 19:43:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.751 19:43:26 
setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.301 00:03:42.301 real 0m9.189s 00:03:42.301 user 0m2.721s 00:03:42.301 sys 0m4.650s 00:03:42.301 19:43:30 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.301 19:43:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:42.301 ************************************ 00:03:42.301 END TEST allowed 00:03:42.301 ************************************ 00:03:42.301 00:03:42.301 real 0m25.630s 00:03:42.301 user 0m8.441s 00:03:42.301 sys 0m14.867s 00:03:42.301 19:43:30 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.301 19:43:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.301 ************************************ 00:03:42.301 END TEST acl 00:03:42.301 ************************************ 00:03:42.301 19:43:30 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.302 19:43:30 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.302 19:43:30 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.302 19:43:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.302 ************************************ 00:03:42.302 START TEST hugepages 00:03:42.302 ************************************ 00:03:42.302 19:43:30 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.565 * Looking for test storage... 
00:03:42.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102590044 kB' 'MemAvailable: 106277028 kB' 'Buffers: 2704 kB' 'Cached: 14724236 kB' 'SwapCached: 0 kB' 'Active: 11572640 kB' 'Inactive: 3688584 kB' 'Active(anon): 11092840 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538364 kB' 'Mapped: 209872 kB' 'Shmem: 10558556 kB' 'KReclaimable: 559544 kB' 'Slab: 1440012 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 880468 kB' 'KernelStack: 27184 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12678660 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235720 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.565 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.566 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 
19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:42.567 19:43:30 setup.sh.hugepages -- 
setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.567 19:43:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:42.567 19:43:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.567 19:43:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.567 19:43:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.567 ************************************ 00:03:42.567 START TEST default_setup 00:03:42.567 ************************************ 00:03:42.567 19:43:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.568 19:43:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.873 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:80:01.0 
(8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:45.873 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:46.134 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:46.134 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:46.134 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:46.134 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:46.134 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:46.134 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.399 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104737296 kB' 'MemAvailable: 108424280 kB' 'Buffers: 2704 kB' 'Cached: 14724368 kB' 'SwapCached: 0 kB' 'Active: 11589752 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109952 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554256 kB' 'Mapped: 210176 kB' 'Shmem: 10558688 kB' 'KReclaimable: 559544 kB' 'Slab: 1437788 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878244 kB' 'KernelStack: 27264 kB' 'PageTables: 9016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12680672 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 
kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.399 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.400 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.400 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104737200 kB' 'MemAvailable: 108424184 kB' 'Buffers: 2704 kB' 'Cached: 14724372 kB' 'SwapCached: 0 kB' 'Active: 11589924 kB' 'Inactive: 3688584 kB' 'Active(anon): 11110124 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554424 kB' 'Mapped: 210140 kB' 'Shmem: 10558692 kB' 'KReclaimable: 559544 kB' 'Slab: 1437780 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878236 kB' 'KernelStack: 27232 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12680692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.401 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.402 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.403 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.403 [... identical non-matching checks for the remaining /proc/meminfo keys elided ...] 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.403 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104737568 kB' 'MemAvailable: 108424552 kB' 'Buffers: 2704 kB' 'Cached: 14724388 kB' 'SwapCached: 0 kB' 'Active: 11589248 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109448 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554148 kB' 'Mapped: 210060 kB' 'Shmem: 10558708 kB' 'KReclaimable: 559544 kB' 'Slab: 1437744 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878200 kB' 'KernelStack: 27216 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12680712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.404 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.404 [... identical non-matching checks for the remaining /proc/meminfo keys elided ...] 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.406 nr_hugepages=1024 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.406 resv_hugepages=0 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.406 surplus_hugepages=0 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.406 anon_hugepages=0 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local
get=HugePages_Total 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104737836 kB' 'MemAvailable: 108424820 kB' 'Buffers: 2704 kB' 'Cached: 14724428 kB' 'SwapCached: 0 kB' 'Active: 11589124 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109324 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553992 kB' 'Mapped: 210060 kB' 'Shmem: 10558748 kB' 'KReclaimable: 559544 kB' 'Slab: 1437744 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878200 kB' 'KernelStack: 27216 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12680736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 
150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.406 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.407 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.671 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( 
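The trace up to this point is `setup/common.sh`'s `get_meminfo` walking every `Key: value` pair of `/proc/meminfo` with `IFS=': ' read -r var val _`, `continue`-ing past each non-matching key until `HugePages_Total` matches and the value 1024 is echoed back to the caller. A minimal stand-alone sketch of that parse (the function name and demo file below are illustrative, not the actual `setup/common.sh` code):

```shell
#!/usr/bin/env bash
# Sketch of the "IFS=': ' read -r var val _" lookup seen in the trace:
# scan a meminfo-style file line by line and print the value of one key.
get_meminfo_field() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. 1024 for HugePages_Total
            return 0
        fi
    done < "$mem_f"
    return 1                     # key not present
}

# Deterministic demo input instead of the live /proc/meminfo:
demo=$(mktemp)
printf '%s\n' 'MemTotal: 126338844 kB' \
              'HugePages_Total: 1024' \
              'HugePages_Free: 1024' > "$demo"
get_meminfo_field HugePages_Total "$demo"
rm -f "$demo"
```

Because `IFS` contains both `:` and a space, the trailing `kB` unit lands in the throwaway `_` field, which is why the trace compares bare key names and echoes bare values.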
nodes_test[node] += resv )) 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57540352 kB' 'MemUsed: 8118656 kB' 'SwapCached: 0 kB' 'Active: 3131512 kB' 'Inactive: 235936 kB' 'Active(anon): 2892088 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3108480 kB' 'Mapped: 92956 kB' 'AnonPages: 262132 kB' 'Shmem: 2633120 kB' 'KernelStack: 14136 kB' 'PageTables: 5500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275960 kB' 'Slab: 784392 kB' 'SReclaimable: 275960 kB' 'SUnreclaim: 
508432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 
19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.672 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.673 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.674 19:43:34 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.674 node0=1024 expecting 1024 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.674 00:03:46.674 real 0m4.000s 00:03:46.674 user 0m1.493s 00:03:46.674 sys 0m2.522s 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.674 19:43:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:46.674 ************************************ 00:03:46.674 END TEST default_setup 00:03:46.674 ************************************ 00:03:46.674 19:43:34 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.674 19:43:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.674 19:43:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.674 19:43:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.674 ************************************ 00:03:46.674 START TEST per_node_1G_alloc 00:03:46.674 ************************************ 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:46.674 19:43:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:46.674 19:43:34 
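The `get_test_nr_hugepages 1048576 0 1` trace above reduces to a simple division: a 1 GiB (1048576 kB) request with the default 2048 kB hugepage size yields 512 pages, which `get_test_nr_hugepages_per_node` then assigns to each of the two requested NUMA nodes (`HUGENODE=0,1`). A minimal sketch of that arithmetic, with illustrative variable names rather than the script's exact internals:

```shell
#!/usr/bin/env bash
# Sketch: derive per-node hugepage counts the way the traced helpers do.
# Assumes the common x86_64 default hugepage size of 2048 kB.
size_kb=1048576           # requested allocation: 1 GiB expressed in kB
default_hugepage_kb=2048  # matches 'Hugepagesize: 2048 kB' in the log
node_ids=(0 1)            # HUGENODE=0,1 in the trace

nr_hugepages=$(( size_kb / default_hugepage_kb ))  # 1048576 / 2048 = 512

declare -A nodes_test
for node in "${node_ids[@]}"; do
  nodes_test[$node]=$nr_hugepages   # each node gets the full 512-page count
done

echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

This matches the trace, where `nr_hugepages=512` is set once and `nodes_test[_no_nodes]=512` fires once per node id.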
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.674 19:43:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.979 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:49.979 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.979 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local 
sorted_t 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104755512 kB' 'MemAvailable: 
108442496 kB' 'Buffers: 2704 kB' 'Cached: 14724528 kB' 'SwapCached: 0 kB' 'Active: 11590044 kB' 'Inactive: 3688584 kB' 'Active(anon): 11110244 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554264 kB' 'Mapped: 209000 kB' 'Shmem: 10558848 kB' 'KReclaimable: 559544 kB' 'Slab: 1437820 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878276 kB' 'KernelStack: 27296 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12672612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236040 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 
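The long run of `[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] … continue` lines that follows is `get_meminfo` scanning the /proc/meminfo snapshot printed above: each `Key: value` record is split with `IFS=': ' read -r var val _`, non-matching keys are skipped, and the value of the first matching key is echoed. A portable sketch of that pattern, reading from a sample here-document instead of the live /proc/meminfo (field values here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the trace: split each
# "Key:   value [kB]" line on ':' and whitespace, then return the
# value for the requested field.
get=HugePages_Surp   # the field verify_nr_hugepages asks for
while IFS=': ' read -r var val _; do
  if [[ $var == "$get" ]]; then
    echo "$val"      # the trace's 'setup/common.sh@33 -- # echo 0'
    break
  fi
done <<'EOF'
MemTotal: 126338844 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
Hugepagesize: 2048 kB
EOF
```

Because the here-document is a redirect rather than a pipe, the loop runs in the current shell, so `$val` still holds the matched value afterward — the same reason the real script can `return 0` with the result in scope.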
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.243 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.244 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 
kB' 'MemFree: 104755652 kB' 'MemAvailable: 108442636 kB' 'Buffers: 2704 kB' 'Cached: 14724532 kB' 'SwapCached: 0 kB' 'Active: 11588360 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108560 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553456 kB' 'Mapped: 208992 kB' 'Shmem: 10558852 kB' 'KReclaimable: 559544 kB' 'Slab: 1437756 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878212 kB' 'KernelStack: 27376 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12671016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.245 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 
19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.246 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.247 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # 
get_meminfo HugePages_Rsvd 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104754892 kB' 'MemAvailable: 108441876 kB' 'Buffers: 2704 kB' 'Cached: 14724548 kB' 'SwapCached: 0 kB' 'Active: 11588560 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108760 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553096 kB' 'Mapped: 208992 kB' 'Shmem: 10558868 kB' 'KReclaimable: 559544 kB' 'Slab: 1437756 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878212 kB' 'KernelStack: 27248 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12671040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.517 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.518 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.518 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.518 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.518 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.518 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.518 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.518 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.518 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r / compare / continue trace repeated for each remaining /proc/meminfo field, SwapTotal through HugePages_Free, until HugePages_Rsvd is reached]
00:03:50.519 19:43:38
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.519 nr_hugepages=1024
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.519 resv_hugepages=0
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.519 surplus_hugepages=0
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.519 anon_hugepages=0
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104753540 kB' 'MemAvailable: 108440524 kB' 'Buffers: 2704 kB' 'Cached: 14724572 kB' 'SwapCached: 0 kB' 'Active: 11588208 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108408 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552744 kB' 'Mapped: 208992 kB' 'Shmem: 10558892 kB' 'KReclaimable: 559544 kB' 'Slab: 1437756 kB' 'SReclaimable: 559544 kB' 'SUnreclaim: 878212 kB' 'KernelStack: 27280 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12672676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236056 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB'
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:50.519 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.520 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:50.520 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.520 19:43:38
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r / compare / continue trace repeated for each /proc/meminfo field, SwapCached through CmaTotal, until HugePages_Total is reached]
00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.521 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58590572 kB' 'MemUsed: 7068436 kB' 
'SwapCached: 0 kB' 'Active: 3131780 kB' 'Inactive: 235936 kB' 'Active(anon): 2892356 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3108596 kB' 'Mapped: 92028 kB' 'AnonPages: 262292 kB' 'Shmem: 2633236 kB' 'KernelStack: 14104 kB' 'PageTables: 5312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275960 kB' 'Slab: 784300 kB' 'SReclaimable: 275960 kB' 'SUnreclaim: 508340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.522 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.523 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.523 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46161908 kB' 'MemUsed: 14517928 kB' 'SwapCached: 0 kB' 'Active: 8456812 kB' 'Inactive: 3452648 kB' 'Active(anon): 8216436 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3452648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11618704 kB' 'Mapped: 116964 kB' 'AnonPages: 290796 kB' 'Shmem: 7925680 kB' 'KernelStack: 13192 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 283584 kB' 'Slab: 653456 kB' 'SReclaimable: 283584 kB' 'SUnreclaim: 369872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.524 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.524 19:43:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repeated xtrace omitted: the IFS=': ' / read -r var val _ loop stepped through the remaining node meminfo fields (Mlocked ... HugePages_Free) with no match for HugePages_Surp]
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:50.525 node0=512 expecting 512
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:50.525 node1=512 expecting 512
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:50.525
00:03:50.525 real 0m3.845s
00:03:50.525 user 0m1.581s
00:03:50.525 sys 0m2.328s
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:50.525 19:43:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:50.525 ************************************
00:03:50.525 END TEST per_node_1G_alloc
00:03:50.525 ************************************
00:03:50.525 19:43:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:50.525 19:43:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:50.525 19:43:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:50.525 19:43:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:50.525 ************************************
00:03:50.525 START TEST even_2G_alloc
00:03:50.525 ************************************
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
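[editor note: the hugepages.sh@81-84 trace entries that follow show nr_hugepages=1024 being distributed evenly across 2 NUMA nodes, producing the "node0=512 expecting 512" / "node1=512 expecting 512" checks above. A minimal standalone re-creation of that even-split pattern is sketched below; the function and variable names mirror the log but are assumptions, not the actual SPDK setup/hugepages.sh source.]

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the even per-node hugepage split seen in the trace:
# give each of _no_nodes nodes an equal share of nr_hugepages, then print the
# same "nodeN=COUNT expecting COUNT" lines the test emits.
split_hugepages_evenly() {
    local nr_hugepages=$1 no_nodes=$2
    local -a nodes_test
    local node
    # Fill nodes_test from the last node down, as the @81-84 loop does.
    for ((node = no_nodes - 1; node >= 0; node--)); do
        nodes_test[node]=$((nr_hugepages / no_nodes))
    done
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
}

split_hugepages_evenly 1024 2
# prints:
# node0=512 expecting 512
# node1=512 expecting 512
```

With 1024 2 MiB pages and 2 nodes this reproduces the 512/512 expectation checked at hugepages.sh@130 in the log.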
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.525 19:43:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:53.900 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:53.900 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:53.900 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc
-- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.162 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104748108 kB' 'MemAvailable: 108435060 kB' 'Buffers: 2704 kB' 'Cached: 14724712 kB' 'SwapCached: 0 kB' 'Active: 11588244 kB' 'Inactive: 3688584 kB' 'Active(anon): 
11108444 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552636 kB' 'Mapped: 209128 kB' 'Shmem: 10559032 kB' 'KReclaimable: 559512 kB' 'Slab: 1438380 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878868 kB' 'KernelStack: 27168 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12673564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236152 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.163 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[repeated xtrace omitted: the IFS=': ' / read -r var val _ loop stepped through the remaining meminfo fields (Buffers ... HardwareCorrupted) with no match for AnonHugePages]
00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
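[editor note: the long `[[ Field == \P\a\t\t\e\r\n ]]` / `continue` runs in this log are xtrace of a loop that splits each meminfo line on ': ' and scans until the requested field matches, then echoes its value. A minimal standalone sketch of that pattern follows; the function name and its file argument are illustrative, not SPDK's actual setup/common.sh get_meminfo.]

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern visible in the xtrace above:
# read "Field: value [kB]" lines, skip non-matching fields, print the value
# of the requested one (e.g. HugePages_Surp) and return.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the continue spam in the trace
        echo "$val"
        return 0
    done < "$mem_f"
    return 1   # field not found
}
```

On the machine in this log, `get_meminfo_field HugePages_Surp` would print 0, matching the `echo 0` / `return 0` entries above.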
00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104747748 kB' 'MemAvailable: 108434700 kB' 'Buffers: 2704 kB' 'Cached: 14724716 kB' 'SwapCached: 0 kB' 'Active: 11588976 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109176 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553384 kB' 'Mapped: 209084 kB' 'Shmem: 10559036 kB' 'KReclaimable: 559512 kB' 'Slab: 1438344 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878832 kB' 'KernelStack: 27328 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12673584 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236104 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 
19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.165 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.166 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 
19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 
00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.167 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104748916 kB' 'MemAvailable: 108435868 kB' 'Buffers: 2704 kB' 'Cached: 14724716 kB' 'SwapCached: 0 kB' 'Active: 11588408 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108608 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552788 kB' 'Mapped: 209008 kB' 
'Shmem: 10559036 kB' 'KReclaimable: 559512 kB' 'Slab: 1438348 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878836 kB' 'KernelStack: 27264 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12673604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236104 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:54.168 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 IFS=': ' / read / continue trace repeated for each remaining /proc/meminfo key, Inactive(anon) through HugePages_Free, until HugePages_Rsvd matched ...]
00:03:54.434 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:54.435 19:43:42
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.435 nr_hugepages=1024 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.435 resv_hugepages=0 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.435 surplus_hugepages=0 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.435 anon_hugepages=0 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n'
  'MemTotal: 126338844 kB' 'MemFree: 104750176 kB' 'MemAvailable: 108437128 kB' 'Buffers: 2704 kB' 'Cached: 14724720 kB' 'SwapCached: 0 kB'
  'Active: 11588480 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108680 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB'
  'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB'
  'AnonPages: 552904 kB' 'Mapped: 209008 kB' 'Shmem: 10559040 kB' 'KReclaimable: 559512 kB' 'Slab: 1438316 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878804 kB'
  'KernelStack: 27200 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB'
  'CommitLimit: 70509448 kB' 'Committed_AS: 12673628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236104 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB'
  'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB'
  'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB'
  'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB'
  'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB'
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:54.435 19:43:42 setup.sh.hugepages.even_2G_alloc -- 
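The wrapped trace in this section is setup/common.sh's get_meminfo helper scanning /proc/meminfo: each line is split with IFS=': ' into a key and a value, non-matching keys fall through via continue, and the value is echoed once the requested key (HugePages_Rsvd, then HugePages_Total) matches. A minimal sketch of that pattern follows; the function body is an illustration reconstructed from the trace, not the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each /proc/meminfo
# line on ': ', skip non-matching keys, and print the value of the match.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mismatched keys fall through, as in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# The surrounding test then checks the hugepage accounting, as at
# setup/hugepages.sh@107: (( requested == nr_hugepages + surp + resv )).
total=$(get_meminfo HugePages_Total)
resv=$(get_meminfo HugePages_Rsvd)
surp=$(get_meminfo HugePages_Surp)
echo "total=$total resv=$resv surp=$surp"
```

In the log above this scan yields HugePages_Rsvd=0 and HugePages_Total=1024, which matches the requested nr_hugepages=1024 with no surplus or reserved pages.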
setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 IFS=': ' / read / continue trace repeated for each /proc/meminfo key, MemAvailable through FileHugePages, while scanning for HugePages_Total ...]
00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.436 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.437 
19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.437 
19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58570328 kB' 'MemUsed: 7088680 kB' 'SwapCached: 0 kB' 'Active: 3130744 kB' 'Inactive: 235936 kB' 'Active(anon): 2891320 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3108736 kB' 'Mapped: 92028 kB' 'AnonPages: 261128 kB' 'Shmem: 2633376 kB' 'KernelStack: 14088 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275960 kB' 'Slab: 784612 kB' 'SReclaimable: 275960 kB' 'SUnreclaim: 508652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.437 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46180380 kB' 'MemUsed: 14499456 kB' 'SwapCached: 0 kB' 'Active: 8457432 kB' 'Inactive: 3452648 kB' 'Active(anon): 8217056 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3452648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11618740 kB' 'Mapped: 116980 kB' 'AnonPages: 291420 kB' 'Shmem: 7925716 kB' 'KernelStack: 13032 kB' 'PageTables: 3208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 283552 kB' 'Slab: 653768 kB' 'SReclaimable: 283552 kB' 'SUnreclaim: 370216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.438 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.438 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.439 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.440 node0=512 expecting 512 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:54.440 node1=512 expecting 512 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.440 00:03:54.440 real 0m3.816s 00:03:54.440 user 0m1.498s 00:03:54.440 sys 0m2.370s 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.440 19:43:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.440 ************************************ 00:03:54.440 END TEST even_2G_alloc 00:03:54.440 ************************************ 00:03:54.440 
19:43:42 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:54.440 19:43:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.440 19:43:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.440 19:43:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.440 ************************************ 00:03:54.440 START TEST odd_alloc 00:03:54.440 ************************************ 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 
> 0 )) 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.440 19:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.740 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 
00:03:57.740 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:57.740 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.740 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104767620 kB' 'MemAvailable: 108454572 kB' 'Buffers: 2704 kB' 'Cached: 14724892 kB' 'SwapCached: 0 kB' 'Active: 11588632 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108832 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552864 kB' 'Mapped: 209176 kB' 'Shmem: 10559212 kB' 'KReclaimable: 559512 kB' 'Slab: 1438392 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878880 kB' 'KernelStack: 27168 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12671656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236136 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 
'DirectMap1G: 101711872 kB' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.004 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104766920 kB' 'MemAvailable: 108453872 kB' 'Buffers: 2704 kB' 'Cached: 14724896 kB' 'SwapCached: 0 kB' 'Active: 11589308 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109508 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553588 kB' 'Mapped: 209024 kB' 'Shmem: 10559216 kB' 'KReclaimable: 559512 
kB' 'Slab: 1438392 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878880 kB' 'KernelStack: 27264 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12671308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236136 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.005 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.006 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104768304 kB' 'MemAvailable: 108455256 kB' 'Buffers: 2704 kB' 'Cached: 14724908 kB' 'SwapCached: 0 kB' 'Active: 11588640 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108840 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552812 kB' 'Mapped: 209024 kB' 'Shmem: 10559228 kB' 'KReclaimable: 559512 kB' 'Slab: 1438408 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878896 kB' 'KernelStack: 27152 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12671332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236040 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.006 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 
19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.270 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 
19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:58.271 nr_hugepages=1025 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.271 resv_hugepages=0 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.271 surplus_hugepages=0 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.271 anon_hugepages=0 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.271 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104767716 kB' 'MemAvailable: 108454668 kB' 'Buffers: 2704 kB' 'Cached: 14724932 kB' 'SwapCached: 0 kB' 'Active: 11588736 kB' 'Inactive: 3688584 kB' 'Active(anon): 11108936 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552912 kB' 'Mapped: 209024 kB' 'Shmem: 10559252 kB' 'KReclaimable: 559512 kB' 'Slab: 1438408 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878896 kB' 'KernelStack: 27152 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12671352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236040 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.271 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 
19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 
19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.272 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.273 19:43:45 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.273 19:43:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58596500 
kB' 'MemUsed: 7062508 kB' 'SwapCached: 0 kB' 'Active: 3129860 kB' 'Inactive: 235936 kB' 'Active(anon): 2890436 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3108868 kB' 'Mapped: 92028 kB' 'AnonPages: 259548 kB' 'Shmem: 2633508 kB' 'KernelStack: 14104 kB' 'PageTables: 5268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275960 kB' 'Slab: 784456 kB' 'SReclaimable: 275960 kB' 'SUnreclaim: 508496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.273 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.274 19:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.274 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46171980 kB' 'MemUsed: 14507856 kB' 'SwapCached: 0 kB' 'Active: 8459496 kB' 'Inactive: 3452648 kB' 'Active(anon): 8219120 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3452648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11618792 kB' 'Mapped: 117500 kB' 'AnonPages: 293428 kB' 'Shmem: 7925768 kB' 'KernelStack: 13064 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 283552 kB' 'Slab: 653952 kB' 'SReclaimable: 283552 kB' 'SUnreclaim: 370400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.275 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 
19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:58.276 node0=512 expecting 513 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:58.276 node1=513 expecting 512 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:58.276 00:03:58.276 real 0m3.760s 00:03:58.276 user 0m1.450s 00:03:58.276 sys 0m2.359s 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.276 19:43:46 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:03:58.276 ************************************ 00:03:58.276 END TEST odd_alloc 00:03:58.276 ************************************ 00:03:58.276 19:43:46 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:58.276 19:43:46 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.276 19:43:46 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.276 19:43:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.276 ************************************ 00:03:58.276 START TEST custom_alloc 00:03:58.276 ************************************ 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.276 19:43:46 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:58.276 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 
00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 
-- # return 0 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.277 19:43:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.578 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:01.578 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:01.578 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@89 -- # local node 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 
'MemFree: 103729856 kB' 'MemAvailable: 107416808 kB' 'Buffers: 2704 kB' 'Cached: 14725064 kB' 'SwapCached: 0 kB' 'Active: 11595024 kB' 'Inactive: 3688584 kB' 'Active(anon): 11115224 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559352 kB' 'Mapped: 209900 kB' 'Shmem: 10559384 kB' 'KReclaimable: 559512 kB' 'Slab: 1437876 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878364 kB' 'KernelStack: 27168 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12678420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235964 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.579 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.580 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.581 
19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103730816 kB' 'MemAvailable: 107417768 kB' 'Buffers: 2704 kB' 'Cached: 14725068 kB' 'SwapCached: 0 kB' 'Active: 11588972 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109172 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553228 kB' 'Mapped: 209128 kB' 'Shmem: 10559388 kB' 'KReclaimable: 559512 kB' 'Slab: 1437876 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878364 kB' 'KernelStack: 27104 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12672320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB'
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.581 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[repetitive loop trace elided: every remaining key from MemFree through HugePages_Rsvd fails the same test and hits `continue`]
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
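The `get_meminfo` loop traced above boils down to a standard bash pattern: split each meminfo line on `': '` into key and value, skip non-matching keys with `continue`, and print the value on a match. A minimal standalone sketch of that pattern (not SPDK's actual `setup/common.sh`; the sample file and `get_field` name are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: look up one key in a meminfo-format file.
get_field() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching key
        echo "$val"                        # value only; the trailing "kB" lands in $_
        return 0
    done < "$file"
    return 1                               # key not present
}

# Fabricated sample in /proc/meminfo format.
printf '%s\n' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Surp: 0' > /tmp/meminfo.sample
get_field HugePages_Surp /tmp/meminfo.sample   # prints 0
```

Scanning the whole file per lookup is O(keys) per call, which is why the trace shows one `continue` per meminfo key before the match.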
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103729808 kB' 'MemAvailable: 107416760 kB' 'Buffers: 2704 kB' 'Cached: 14725068 kB' 'SwapCached: 0 kB' 'Active: 11589492 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109692 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553704 kB' 'Mapped: 209052 kB' 'Shmem: 10559388 kB' 'KReclaimable: 559512 kB' 'Slab: 1437892 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878380 kB' 'KernelStack: 27168 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12672340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB'
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.584 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[repetitive loop trace elided: keys from MemFree onward fail the same test and hit `continue`; this log chunk breaks off mid-loop after WritebackTmp]
setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:01.586 nr_hugepages=1536 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.586 resv_hugepages=0 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.586 surplus_hugepages=0 00:04:01.586 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.586 anon_hugepages=0 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103729820 kB' 'MemAvailable: 107416772 kB' 'Buffers: 2704 kB' 'Cached: 14725096 kB' 'SwapCached: 0 kB' 'Active: 11589512 kB' 'Inactive: 3688584 kB' 'Active(anon): 11109712 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553696 kB' 'Mapped: 209052 kB' 'Shmem: 10559416 kB' 'KReclaimable: 559512 kB' 'Slab: 1437892 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878380 kB' 'KernelStack: 27168 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12672360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.586 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 
19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.587 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.588 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58598628 kB' 'MemUsed: 7060380 kB' 'SwapCached: 0 kB' 'Active: 3130428 kB' 'Inactive: 235936 kB' 'Active(anon): 2891004 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3108948 kB' 
'Mapped: 92028 kB' 'AnonPages: 260544 kB' 'Shmem: 2633588 kB' 'KernelStack: 14088 kB' 'PageTables: 5200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275960 kB' 'Slab: 784236 kB' 'SReclaimable: 275960 kB' 'SUnreclaim: 508276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.588 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 1 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45131148 kB' 'MemUsed: 15548688 kB' 'SwapCached: 0 kB' 'Active: 8459288 kB' 'Inactive: 3452648 kB' 'Active(anon): 8218912 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3452648 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11618892 kB' 'Mapped: 117024 kB' 'AnonPages: 293344 kB' 'Shmem: 7925868 kB' 'KernelStack: 13112 kB' 'PageTables: 3512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 283552 kB' 'Slab: 653656 kB' 'SReclaimable: 283552 kB' 'SUnreclaim: 370104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.589 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:01.852 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 
19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.853 19:43:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.854 node0=512 expecting 512 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:01.854 node1=1024 expecting 1024 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:01.854 00:04:01.854 real 0m3.429s 00:04:01.854 user 0m1.318s 00:04:01.854 sys 0m2.098s 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.854 19:43:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.854 ************************************ 00:04:01.854 END TEST custom_alloc 00:04:01.854 ************************************ 00:04:01.854 19:43:49 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.854 19:43:49 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.854 19:43:49 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.854 19:43:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.854 ************************************ 00:04:01.854 START TEST no_shrink_alloc 00:04:01.854 ************************************ 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # 
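The `custom_alloc` trace above ends by joining the per-node hugepage counts and comparing them against the expected split with `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]`. A minimal sketch of that final check, with illustrative names (in the real `setup/hugepages.sh` the `nodes_test` array is filled from the meminfo scan, not by hand):

```shell
#!/usr/bin/env bash
# Sketch of the final custom_alloc verification: join the per-node
# hugepage counts with commas and compare against the expected split.
# nodes_test is filled by hand here; in the trace it comes from meminfo.
nodes_test=([0]=512 [1]=1024)
expected="512,1024"
actual=$(IFS=,; echo "${nodes_test[*]}")
if [[ $actual == "$expected" ]]; then
    echo "custom_alloc OK"
else
    echo "custom_alloc FAILED: got $actual" >&2
fi
```

Setting `IFS=,` inside the command substitution makes `"${nodes_test[*]}"` join the array with commas without disturbing the caller's `IFS`.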
node_ids=('0') 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.854 19:43:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.157 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:80:01.7 (8086 0b00): 
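The `no_shrink_alloc` prologue above (`get_test_nr_hugepages 2097152 0`) converts a size into a hugepage count and assigns it to the requested node: with a 2048 kB default hugepage size, 2097152 kB yields `nr_hugepages=1024` and `nodes_test[0]=1024`, exactly as traced. A sketch of that computation, assuming it mirrors the script's logic (names here are illustrative, not the real helpers):

```shell
#!/usr/bin/env bash
# Sketch (assumption: mirrors the get_test_nr_hugepages logic traced
# above): derive the hugepage count from a size in kB and assign it to
# each requested NUMA node.
get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift
    local default_hugepages_kb=2048   # Hugepagesize: 2048 kB in the trace
    local nr=$(( size_kb / default_hugepages_kb ))
    local node
    for node in "$@"; do              # user-supplied node IDs, e.g. "0"
        echo "node${node}=${nr}"
    done
}
get_test_nr_hugepages_sketch 2097152 0
# prints: node0=1024
```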
Already using the vfio-pci driver 00:04:05.157 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:05.157 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.157 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.423 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104778816 kB' 'MemAvailable: 108465768 kB' 'Buffers: 2704 kB' 'Cached: 14725240 kB' 'SwapCached: 0 kB' 'Active: 11591520 kB' 'Inactive: 3688584 kB' 'Active(anon): 11111720 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555596 kB' 'Mapped: 209096 kB' 'Shmem: 10559560 kB' 'KReclaimable: 559512 kB' 'Slab: 1437896 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878384 kB' 'KernelStack: 27152 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12673248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.423 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 
19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.424 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.425 19:43:53 
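The long run of `[[ Key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue` lines above is `setup/common.sh`'s `get_meminfo` scanning each meminfo line with `IFS=': ' read -r var val _` until the requested key matches, then echoing its value (here `echo 0` for `AnonHugePages`, captured as `anon=0`). A self-contained sketch of that scan, reading from stdin instead of `/proc/meminfo` so it is portable:

```shell
#!/usr/bin/env bash
# Sketch (assumption: mirrors setup/common.sh get_meminfo): scan
# meminfo-style lines and print the value for one key. IFS=': '
# splits "Key:   value kB" into var=Key, val=value, rest into _.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long [[ ... ]] trace above
        echo "$val"
        return 0
    done
    return 1
}
printf 'MemTotal: 126338844 kB\nHugePages_Surp: 0\n' | get_meminfo_sketch HugePages_Surp
# prints: 0
```

The real helper reads `/proc/meminfo` (or a per-node file) and returns the value through the `echo N` / `return 0` pair seen in the trace.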
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104780396 kB' 'MemAvailable: 108467348 kB' 'Buffers: 2704 kB' 'Cached: 14725240 kB' 'SwapCached: 0 kB' 'Active: 11591700 kB' 'Inactive: 3688584 kB' 'Active(anon): 11111900 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555792 kB' 'Mapped: 209096 kB' 'Shmem: 10559560 kB' 'KReclaimable: 559512 kB' 'Slab: 1437928 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878416 kB' 'KernelStack: 27120 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12673264 kB' 
'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 
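The `mem=("${mem[@]#Node +([0-9]) }")` step in the trace handles the per-node case: lines in `/sys/devices/system/node/nodeN/meminfo` carry a `Node N ` prefix that `/proc/meminfo` lines lack, and the extglob pattern strips it so both sources parse identically. A sketch with synthetic input (the file path and values are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: per-node meminfo lines are prefixed "Node 0 "; strip that
# prefix with an extglob pattern, as the traced mem=(...) step does.
shopt -s extglob   # enables the +([0-9]) pattern
mapfile -t mem < <(printf 'Node 0 MemTotal: 100 kB\nNode 0 MemFree: 50 kB\n')
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# prints:
# MemTotal: 100 kB
# MemFree: 50 kB
```

`extglob` must be enabled when the expansion runs, since `+([0-9])` is not a valid glob otherwise.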
19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 
19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.425 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.426 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104779384 kB' 'MemAvailable: 108466336 kB' 'Buffers: 2704 kB' 'Cached: 14725260 kB' 'SwapCached: 0 kB' 'Active: 11592348 kB' 'Inactive: 3688584 kB' 'Active(anon): 11112548 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556396 kB' 'Mapped: 209096 kB' 'Shmem: 10559580 kB' 'KReclaimable: 559512 kB' 'Slab: 1437896 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878384 kB' 'KernelStack: 27136 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12672920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235848 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.427 
19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.427 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeated for the fields Dirty through NFS_Unstable ...]
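The long `IFS=': '` / `read -r var val _` / `continue` trace above is a field-by-field scan of `/proc/meminfo` looking for one key. A minimal standalone sketch of that pattern (illustrative only, not the actual SPDK `setup/common.sh` helper):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: split each
# /proc/meminfo line on ': ', skip non-matching keys, print the value.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching field produces one [[ ... ]] / continue pair
        # in the xtrace output, which is what fills the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Total
```

Setting `IFS=': '` makes `read` split on both the colon and the padding spaces, so for a line like `HugePages_Rsvd:        0` the key lands in `var` and the number in `val`, with any trailing `kB` unit absorbed by `_`.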
[... same scan continued for the fields Bounce through HugePages_Free ...]
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104779424 kB' 'MemAvailable: 108466376 kB' 'Buffers: 2704 kB' 'Cached: 14725280 kB' 'SwapCached: 0 kB' 'Active: 11591468 kB' 'Inactive: 3688584 kB' 'Active(anon): 11111668 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555408 kB' 'Mapped: 209096 kB' 'Shmem: 10559600 kB' 'KReclaimable: 559512 kB' 'Slab: 1437896 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 878384 kB' 'KernelStack: 27056 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12672940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB'
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.429 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue trace repeated for the fields MemFree through CmaFree ...]
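After the global scan returns, the trace moves into per-NUMA-node bookkeeping (`get_nodes`, the `nodes_sys` array keyed by node number, and a per-node `get_meminfo` against `/sys/devices/system/node/node0/meminfo`). A hedged standalone sketch of that bookkeeping, using the standard sysfs hugepages layout (the array name `nodes_sys` matches the trace; the rest is illustrative, not the real `setup/hugepages.sh`):

```shell
#!/usr/bin/env bash
# Collect 2 MiB nr_hugepages per NUMA node into an associative array
# keyed by node number, then sum it, mirroring the get_nodes trace.
shopt -s extglob nullglob   # extglob enables node+([0-9]), as in the trace
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything up to the last "node" -> "0", "1", ...
    nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages" 2>/dev/null || echo 0)
done
no_nodes=${#nodes_sys[@]}
total=0
for n in "${nodes_sys[@]}"; do total=$(( total + n )); done
echo "no_nodes=$no_nodes total_2M_hugepages=$total"
```

In the log, node 0 holds all 1024 pages and node 1 holds 0, which is why the test then re-reads `HugePages_Surp` and friends from the node-level meminfo file to verify the split.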
00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57562416 kB' 'MemUsed: 8096592 kB' 'SwapCached: 0 kB' 'Active: 3130820 kB' 'Inactive: 235936 kB' 'Active(anon): 2891396 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3108988 kB' 'Mapped: 92036 kB' 'AnonPages: 260924 kB' 'Shmem: 2633628 kB' 'KernelStack: 14024 kB' 'PageTables: 5028 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275960 kB' 'Slab: 784260 kB' 'SReclaimable: 275960 kB' 'SUnreclaim: 508300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.431 
19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.431 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 
19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- 
# (( nodes_test[node] += 0 )) 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.432 node0=1024 expecting 1024 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:05.432 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:05.433 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.433 19:43:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.740 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:08.740 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:04:08.740 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.740 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104791828 kB' 'MemAvailable: 108478780 kB' 'Buffers: 2704 kB' 'Cached: 14725392 kB' 'SwapCached: 0 kB' 'Active: 11591972 kB' 'Inactive: 3688584 kB' 'Active(anon): 11112172 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555708 kB' 'Mapped: 209136 kB' 'Shmem: 10559712 kB' 'KReclaimable: 559512 kB' 'Slab: 1437212 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 877700 kB' 'KernelStack: 27152 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12674248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235880 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 
'DirectMap1G: 101711872 kB' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.740 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 
19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 
19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.741 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.742 
19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104792956 kB' 'MemAvailable: 108479908 kB' 'Buffers: 2704 kB' 'Cached: 14725396 kB' 'SwapCached: 0 kB' 'Active: 11592356 kB' 'Inactive: 3688584 kB' 'Active(anon): 11112556 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556080 kB' 'Mapped: 209100 kB' 'Shmem: 10559716 kB' 'KReclaimable: 559512 kB' 'Slab: 1437164 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 877652 kB' 'KernelStack: 27168 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12674264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.742 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.743 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.744 
19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.744 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104793340 kB' 'MemAvailable: 108480292 kB' 'Buffers: 2704 kB' 'Cached: 14725396 kB' 'SwapCached: 0 kB' 'Active: 11592368 kB' 'Inactive: 3688584 kB' 'Active(anon): 11112568 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556124 kB' 'Mapped: 209100 kB' 'Shmem: 10559716 kB' 'KReclaimable: 559512 kB' 'Slab: 1437164 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 877652 kB' 'KernelStack: 27200 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12674288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:08.744 19:43:56 [... repetitive scan trace elided: each key from MemTotal through HugePages_Free is read with IFS=': ' and compared against HugePages_Rsvd; none match, all continue ...] 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
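The trace above shows `get_meminfo` in action: it snapshots `/proc/meminfo` (or a per-node sysfs copy), strips any `Node N ` prefix, then walks the lines with `IFS=': '` until the requested key matches and its value is echoed. A minimal sketch of that logic, reconstructed from the trace (names follow the trace, but the loop is simplified and the fallback for an absent key is an assumption; the real SPDK `setup/common.sh` may differ in detail):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo helper implied by the trace above (assumption:
# simplified reconstruction, not the exact SPDK implementation).
shopt -s extglob   # required for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo
    # A per-node query reads the sysfs copy, whose lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix if present
    local line
    for line in "${mem[@]}"; do
        # Split e.g. "HugePages_Rsvd:       0" into key and value on ':' and spaces.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    echo 0   # fallback when the key is absent (assumption; the trace only shows the matching path)
}
```

With `HugePages_Rsvd: 0` in the snapshot, the match branch echoes `0` and returns 0, which is exactly the `echo 0` / `return 0` pair the trace records before `hugepages.sh` stores the result.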
00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.746 nr_hugepages=1024 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.746 resv_hugepages=0 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.746 surplus_hugepages=0 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.746 anon_hugepages=0 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.746 19:43:56 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.746 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104793508 kB' 'MemAvailable: 108480460 kB' 'Buffers: 2704 kB' 'Cached: 14725436 kB' 'SwapCached: 0 kB' 'Active: 11592352 kB' 'Inactive: 3688584 kB' 'Active(anon): 11112552 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3688584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556092 kB' 'Mapped: 209100 kB' 'Shmem: 10559756 kB' 'KReclaimable: 559512 kB' 'Slab: 1437164 kB' 'SReclaimable: 559512 kB' 'SUnreclaim: 877652 kB' 'KernelStack: 27184 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12674308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4568436 kB' 'DirectMap2M: 29714432 kB' 'DirectMap1G: 101711872 kB' 00:04:08.746 19:43:56 [... repetitive scan trace elided: MemTotal through Dirty each read with IFS=': ' and compared against HugePages_Total; none match, all continue ...] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.747 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 
19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- 
# local mem_f mem
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57583164 kB' 'MemUsed: 8075844 kB' 'SwapCached: 0 kB' 'Active: 3130504 kB' 'Inactive: 235936 kB' 'Active(anon): 2891080 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3108992 kB' 'Mapped: 92028 kB' 'AnonPages: 260520 kB' 'Shmem: 2633632 kB' 'KernelStack: 14088 kB' 'PageTables: 5200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275960 kB' 'Slab: 783724 kB' 'SReclaimable: 275960 kB' 'SUnreclaim: 507764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.748 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same setup/common.sh@31 IFS=': ' / read -r var val _ / @32 [[ <field> == HugePages_Surp ]] / continue xtrace triplet repeats for every remaining node0 meminfo field (MemFree through HugePages_Free) until the key matches ...]
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024
== \1\0\2\4 ]] 00:04:08.750 00:04:08.750 real 0m6.911s 00:04:08.750 user 0m2.631s 00:04:08.750 sys 0m4.318s 00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.750 19:43:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.750 ************************************ 00:04:08.750 END TEST no_shrink_alloc 00:04:08.750 ************************************ 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.750 19:43:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.750 00:04:08.750 real 0m26.388s 00:04:08.750 user 0m10.209s 00:04:08.750 sys 0m16.418s 00:04:08.750 19:43:56 
setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.750 19:43:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.750 ************************************ 00:04:08.750 END TEST hugepages 00:04:08.750 ************************************ 00:04:08.750 19:43:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:08.750 19:43:56 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.750 19:43:56 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.750 19:43:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.750 ************************************ 00:04:08.750 START TEST driver 00:04:08.750 ************************************ 00:04:08.750 19:43:56 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:09.012 * Looking for test storage... 
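The hugepages test that just finished spends most of its log on a `IFS=': ' read -r var val _` loop that scans meminfo-style output and keeps only the counter it cares about (here `HugePages_Surp`). A minimal standalone sketch of that parsing pattern — `parse_surplus` and `meminfo_sample` are illustrative names, and the here-string stands in for the real `/proc/meminfo` read:

```shell
#!/usr/bin/env bash
# Sketch of the "key: value" scan seen in the log above: split each line on
# ':' plus spaces, skip until the wanted key, then emit its value.
parse_surplus() {
  local var val _
  while IFS=': ' read -r var val _; do
    [[ $var == HugePages_Surp ]] || continue   # same literal-match test as setup/common.sh@32
    echo "$val"
    return 0
  done
}

# Stand-in for /proc/meminfo (assumption: real input has the same layout).
meminfo_sample='HugePages_Total:    1024
HugePages_Free:      512
HugePages_Surp:        0'

parse_surplus <<<"$meminfo_sample"
```

The backslash-escaped comparison in the log (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) is just xtrace's rendering of a literal (non-glob) `[[ == ]]` match, equivalent to the plain comparison above.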
00:04:09.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:09.012 19:43:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:09.012 19:43:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.012 19:43:56 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.374 19:44:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.374 19:44:01 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.374 19:44:01 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.374 19:44:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:14.374 ************************************ 00:04:14.374 START TEST guess_driver 00:04:14.374 ************************************ 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 314 > 0 )) 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:14.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:14.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:14.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:14.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:14.374 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.374 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:14.374 Looking for driver=vfio-pci 00:04:14.375 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.375 19:44:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:14.375 19:44:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.375 19:44:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.681 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.943 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.943 19:44:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:17.943 19:44:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.943 19:44:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.241 00:04:23.241 real 0m8.841s 00:04:23.241 user 0m3.078s 00:04:23.241 sys 0m5.000s 00:04:23.241 19:44:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.241 19:44:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.241 ************************************ 00:04:23.241 END TEST guess_driver 00:04:23.241 ************************************ 00:04:23.241 00:04:23.241 real 0m13.919s 00:04:23.241 user 0m4.652s 00:04:23.241 sys 0m7.719s 00:04:23.241 19:44:10 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.241 19:44:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:23.241 ************************************ 00:04:23.241 END TEST driver 00:04:23.241 ************************************ 00:04:23.241 19:44:10 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.241 19:44:10 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.241 19:44:10 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.241 19:44:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.241 ************************************ 00:04:23.241 START TEST devices 00:04:23.241 ************************************ 00:04:23.241 19:44:10 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:23.241 * Looking for test storage... 
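The guess_driver test above settles on vfio-pci by checking two things: that the kernel exposes IOMMU groups (the log counts 314) and that `modprobe --show-depends vfio_pci` resolves to real `.ko` modules. A hedged sketch of that decision, with the probing factored out into parameters so it runs anywhere — `pick_driver` is an illustrative name, not a function from driver.sh:

```shell
#!/usr/bin/env bash
# Sketch of the driver-guessing heuristic: vfio-pci is usable only when
# IOMMU groups exist AND the vfio_pci module dependency chain resolves.
pick_driver() {
  local n_groups=$1   # e.g. count of /sys/kernel/iommu_groups/* entries
  local vfio_ok=$2    # 1 if `modprobe --show-depends vfio_pci` listed .ko files
  if (( n_groups > 0 )) && (( vfio_ok )); then
    echo "vfio-pci"
  else
    echo "No valid driver found"   # the fallback string driver.sh@51 tests against
  fi
}

pick_driver 314 1   # mirrors the 314 IOMMU groups seen in this run
```

On the real system the two inputs come from expanding the `/sys/kernel/iommu_groups/*` glob and from `modprobe --show-depends` output matching `*.ko*`, exactly as in the log.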
00:04:23.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:23.241 19:44:10 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:23.241 19:44:10 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:23.241 19:44:10 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.241 19:44:10 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:27.450 19:44:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:27.450 19:44:14 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:27.450 No valid GPT data, bailing 00:04:27.450 19:44:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.450 19:44:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:27.450 19:44:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:27.450 19:44:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:27.450 19:44:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:27.450 19:44:14 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:27.450 19:44:14 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.450 19:44:14 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.450 19:44:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:27.450 ************************************ 00:04:27.450 START TEST nvme_mount 00:04:27.450 ************************************ 00:04:27.450 19:44:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:27.450 19:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:27.450 19:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:27.450 19:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:27.451 19:44:14 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.451 19:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:28.025 Creating new GPT entries in memory. 00:04:28.025 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.025 other utilities. 00:04:28.025 19:44:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.025 19:44:15 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.025 19:44:15 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.025 19:44:15 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.025 19:44:15 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.968 Creating new GPT entries in memory. 00:04:28.968 The operation has completed successfully. 
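The `sgdisk /dev/nvme0n1 --new=1:2048:2099199` call logged above comes from simple sector arithmetic in setup/common.sh: a 1 GiB partition size is converted to 512-byte sectors, the first partition starts at sector 2048, and the end sector is inclusive. A minimal sketch reproducing those numbers (the variable names mirror the `part_start`/`part_end` seen in the xtrace):

```shell
#!/usr/bin/env bash
# Sketch of the partition-bound arithmetic behind "--new=1:2048:2099199".
size=1073741824                         # 1 GiB in bytes
(( size /= 512 ))                       # bytes -> 512-byte sectors: 2097152
part_start=2048                         # first sector after GPT metadata
(( part_end = part_start + size - 1 ))  # inclusive end sector

echo "--new=1:${part_start}:${part_end}"
```

Since the end bound is inclusive, subtracting 1 keeps the partition at exactly 2097152 sectors; omitting it would make the partition one sector too large.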
00:04:28.968 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.968 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.968 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3439875 00:04:28.968 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.969 19:44:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.272 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.534 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.534 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.534 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 
00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:32.795 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:32.795 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:33.056 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:33.056 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:33.056 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:33.056 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.056 19:44:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:36.358 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' ''
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.618 19:44:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.951 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.952 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.952 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:39.952 19:44:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:40.212 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:40.212
00:04:40.212 real	0m13.317s
00:04:40.212 user	0m4.100s
00:04:40.212 sys	0m7.077s
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:40.212 19:44:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:40.212 ************************************
00:04:40.212 END TEST nvme_mount
00:04:40.212 ************************************
00:04:40.212 19:44:28 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:40.212 19:44:28 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:40.212 19:44:28 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:40.212 19:44:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:40.212 ************************************
00:04:40.212 START TEST dm_mount
00:04:40.212 ************************************
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:40.212 19:44:28 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:41.598 Creating new GPT entries in memory.
00:04:41.598 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:41.598 other utilities.
00:04:41.598 19:44:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:41.598 19:44:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:41.598 19:44:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:41.598 19:44:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:41.598 19:44:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:42.542 Creating new GPT entries in memory.
00:04:42.542 The operation has completed successfully.
00:04:42.542 19:44:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:42.542 19:44:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:42.542 19:44:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:42.542 19:44:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:42.542 19:44:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:43.487 The operation has completed successfully.
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3444874
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.487 19:44:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:46.793 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.055 19:44:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]]
00:04:50.361 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:50.623 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:50.623
00:04:50.623 real	0m10.398s
00:04:50.623 user	0m2.691s
00:04:50.623 sys	0m4.741s
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:50.623 19:44:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:50.623 ************************************
00:04:50.623 END TEST dm_mount
00:04:50.623 ************************************
00:04:50.884 19:44:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:04:50.884 19:44:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:04:50.884 19:44:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:50.884 19:44:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:50.885 19:44:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:50.885 19:44:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:50.885 19:44:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:51.146 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:51.146 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:51.146 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:51.146 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:51.146 19:44:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:04:51.146 19:44:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:51.146 19:44:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:51.146 19:44:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:51.146 19:44:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:51.146 19:44:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:04:51.146 19:44:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:04:51.146
00:04:51.146 real	0m28.221s
00:04:51.146 user	0m8.384s
00:04:51.146 sys	0m14.590s
00:04:51.146 19:44:38 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:51.146 19:44:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:51.146 ************************************
00:04:51.146 END TEST devices
00:04:51.146 ************************************
00:04:51.146
00:04:51.146 real	1m34.564s
00:04:51.146 user	0m31.843s
00:04:51.146 sys	0m53.869s
00:04:51.146 19:44:38 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:51.146 19:44:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:51.146 ************************************
00:04:51.146 END TEST setup.sh
00:04:51.146 ************************************
00:04:51.146 19:44:38 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:54.451 Hugepages
00:04:54.451 node hugesize free / total
00:04:54.451 node0 1048576kB 0 / 0
00:04:54.451 node0 2048kB 2048 / 2048
00:04:54.451 node1 1048576kB 0 / 0
00:04:54.451 node1 2048kB 0 / 0
00:04:54.451
00:04:54.451 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:54.451 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:04:54.451 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:04:54.451 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:04:54.451 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:04:54.451 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:04:54.451 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:04:54.451 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:04:54.451 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:04:54.451 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:04:54.451 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:04:54.451 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:04:54.451 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:04:54.451 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:04:54.451 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:04:54.451 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:04:54.451 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:04:54.451 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:04:54.451 19:44:42 -- spdk/autotest.sh@130 -- # uname -s
00:04:54.451 19:44:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:54.451 19:44:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:54.451 19:44:42 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:57.753 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:57.753 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:57.753 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:58.015 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:59.930 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:05:00.239 19:44:47 -- common/autotest_common.sh@1532 -- # sleep 1
00:05:01.183 19:44:48 --
common/autotest_common.sh@1533 -- # bdfs=() 00:05:01.183 19:44:48 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:01.183 19:44:48 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.183 19:44:48 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:01.183 19:44:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:01.183 19:44:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:01.183 19:44:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.183 19:44:48 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:01.183 19:44:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:01.183 19:44:49 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:01.183 19:44:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:01.183 19:44:49 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.485 Waiting for block devices as requested 00:05:04.485 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:04.485 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:04.747 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:04.747 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:04.747 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.007 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:05.007 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.007 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:05.007 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:05.269 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:05.269 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:05.530 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:05.530 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:05.530 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:05.790 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:05:05.790 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:05.790 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:06.051 19:44:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:06.051 19:44:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:06.051 19:44:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:06.051 19:44:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:06.051 19:44:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:06.051 19:44:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:06.051 19:44:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:06.051 19:44:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:06.051 19:44:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:06.051 19:44:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:06.051 19:44:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:06.051 19:44:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:06.051 19:44:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:06.051 19:44:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:06.051 19:44:53 -- 
common/autotest_common.sh@1557 -- # continue 00:05:06.051 19:44:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:06.051 19:44:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.051 19:44:53 -- common/autotest_common.sh@10 -- # set +x 00:05:06.051 19:44:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:06.051 19:44:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.051 19:44:53 -- common/autotest_common.sh@10 -- # set +x 00:05:06.051 19:44:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.352 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.352 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:09.612 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:10.184 19:44:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:10.184 19:44:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.184 19:44:57 -- common/autotest_common.sh@10 -- # set +x 00:05:10.184 19:44:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:10.184 19:44:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:10.184 19:44:57 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.184 19:44:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:10.184 19:44:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:10.184 19:44:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:10.184 19:44:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:10.184 19:44:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:10.184 19:44:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.184 19:44:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:10.184 19:44:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:10.184 19:44:57 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:10.184 19:44:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:10.184 19:44:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:10.184 19:44:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:10.184 19:44:57 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:10.184 19:44:57 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:10.184 19:44:57 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:10.184 19:44:57 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:10.184 19:44:57 -- common/autotest_common.sh@1593 -- # return 0 00:05:10.184 19:44:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:10.184 19:44:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:10.184 19:44:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.184 19:44:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.184 19:44:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:10.184 19:44:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.184 19:44:57 -- common/autotest_common.sh@10 -- # set +x 
00:05:10.184 19:44:57 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:05:10.184 19:44:57 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:10.185 19:44:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:10.185 19:44:57 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.185 19:44:57 -- common/autotest_common.sh@10 -- # set +x
00:05:10.185 ************************************
00:05:10.185 START TEST env
00:05:10.185 ************************************
00:05:10.185 19:44:58 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:10.185 * Looking for test storage...
00:05:10.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:05:10.185 19:44:58 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:10.185 19:44:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:10.185 19:44:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.185 19:44:58 env -- common/autotest_common.sh@10 -- # set +x
00:05:10.446 ************************************
00:05:10.446 START TEST env_memory
00:05:10.446 ************************************
00:05:10.446 19:44:58 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:10.446
00:05:10.446
00:05:10.446 CUnit - A unit testing framework for C - Version 2.1-3
00:05:10.446 http://cunit.sourceforge.net/
00:05:10.446
00:05:10.446
00:05:10.446 Suite: memory
00:05:10.446 Test: alloc and free memory map ...[2024-07-24 19:44:58.205250] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:10.446 passed
00:05:10.446 Test: mem map translation ...[2024-07-24 19:44:58.230903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:10.446 [2024-07-24 19:44:58.230931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:10.446 [2024-07-24 19:44:58.230980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:10.446 [2024-07-24 19:44:58.230989] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:10.446 passed
00:05:10.446 Test: mem map registration ...[2024-07-24 19:44:58.286379] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:05:10.446 [2024-07-24 19:44:58.286409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:05:10.446 passed
00:05:10.446 Test: mem map adjacent registrations ...passed
00:05:10.446
00:05:10.446 Run Summary: Type Total Ran Passed Failed Inactive
00:05:10.446 suites 1 1 n/a 0 0
00:05:10.446 tests 4 4 4 0 0
00:05:10.446 asserts 152 152 152 0 n/a
00:05:10.446
00:05:10.446 Elapsed time = 0.194 seconds
00:05:10.446
00:05:10.446 real 0m0.209s
00:05:10.446 user 0m0.196s
00:05:10.446 sys 0m0.012s
00:05:10.446 19:44:58 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:10.446 19:44:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:05:10.446 ************************************
00:05:10.446 END TEST env_memory
00:05:10.446 ************************************
00:05:10.708 19:44:58 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:10.708 19:44:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:10.708 19:44:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.708 19:44:58 env -- common/autotest_common.sh@10 -- # set +x
00:05:10.708 ************************************
00:05:10.708 START TEST env_vtophys
00:05:10.708 ************************************
00:05:10.708 19:44:58 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:10.708 EAL: lib.eal log level changed from notice to debug
00:05:10.708 EAL: Detected lcore 0 as core 0 on socket 0
00:05:10.708 EAL: Detected lcore 1 as core 1 on socket 0
00:05:10.708 EAL: Detected lcore 2 as core 2 on socket 0
00:05:10.708 EAL: Detected lcore 3 as core 3 on socket 0
00:05:10.708 EAL: Detected lcore 4 as core 4 on socket 0
00:05:10.708 EAL: Detected lcore 5 as core 5 on socket 0
00:05:10.708 EAL: Detected lcore 6 as core 6 on socket 0
00:05:10.708 EAL: Detected lcore 7 as core 7 on socket 0
00:05:10.708 EAL: Detected lcore 8 as core 8 on socket 0
00:05:10.708 EAL: Detected lcore 9 as core 9 on socket 0
00:05:10.708 EAL: Detected lcore 10 as core 10 on socket 0
00:05:10.708 EAL: Detected lcore 11 as core 11 on socket 0
00:05:10.708 EAL: Detected lcore 12 as core 12 on socket 0
00:05:10.708 EAL: Detected lcore 13 as core 13 on socket 0
00:05:10.708 EAL: Detected lcore 14 as core 14 on socket 0
00:05:10.708 EAL: Detected lcore 15 as core 15 on socket 0
00:05:10.708 EAL: Detected lcore 16 as core 16 on socket 0
00:05:10.708 EAL: Detected lcore 17 as core 17 on socket 0
00:05:10.708 EAL: Detected lcore 18 as core 18 on socket 0
00:05:10.708 EAL: Detected lcore 19 as core 19 on socket 0
00:05:10.708 EAL: Detected lcore 20 as core 20 on socket 0
00:05:10.708 EAL: Detected lcore 21 as core 21 on socket 0
00:05:10.708 EAL: Detected lcore 22 as core 22 on socket 0
00:05:10.708 EAL: Detected lcore 23 as core 23 on socket 0
00:05:10.708 EAL: Detected lcore 24 as core 24 on socket 0
00:05:10.708 EAL: Detected lcore 25 as core 25 on socket 0
00:05:10.708 EAL: Detected lcore 26 as core 26 on socket 0
00:05:10.708 EAL: Detected lcore 27 as core 27 on socket 0
00:05:10.708 EAL: Detected lcore 28 as core 28 on socket 0
00:05:10.708 EAL: Detected lcore 29 as core 29 on socket 0
00:05:10.708 EAL: Detected lcore 30 as core 30 on socket 0
00:05:10.708 EAL: Detected lcore 31 as core 31 on socket 0
00:05:10.708 EAL: Detected lcore 32 as core 32 on socket 0
00:05:10.708 EAL: Detected lcore 33 as core 33 on socket 0
00:05:10.708 EAL: Detected lcore 34 as core 34 on socket 0
00:05:10.708 EAL: Detected lcore 35 as core 35 on socket 0
00:05:10.708 EAL: Detected lcore 36 as core 0 on socket 1
00:05:10.708 EAL: Detected lcore 37 as core 1 on socket 1
00:05:10.708 EAL: Detected lcore 38 as core 2 on socket 1
00:05:10.708 EAL: Detected lcore 39 as core 3 on socket 1
00:05:10.708 EAL: Detected lcore 40 as core 4 on socket 1
00:05:10.708 EAL: Detected lcore 41 as core 5 on socket 1
00:05:10.708 EAL: Detected lcore 42 as core 6 on socket 1
00:05:10.708 EAL: Detected lcore 43 as core 7 on socket 1
00:05:10.708 EAL: Detected lcore 44 as core 8 on socket 1
00:05:10.708 EAL: Detected lcore 45 as core 9 on socket 1
00:05:10.708 EAL: Detected lcore 46 as core 10 on socket 1
00:05:10.708 EAL: Detected lcore 47 as core 11 on socket 1
00:05:10.708 EAL: Detected lcore 48 as core 12 on socket 1
00:05:10.708 EAL: Detected lcore 49 as core 13 on socket 1
00:05:10.708 EAL: Detected lcore 50 as core 14 on socket 1
00:05:10.708 EAL: Detected lcore 51 as core 15 on socket 1
00:05:10.708 EAL: Detected lcore 52 as core 16 on socket 1
00:05:10.708 EAL: Detected lcore 53 as core 17 on socket 1
00:05:10.708 EAL: Detected lcore 54 as core 18 on socket 1
00:05:10.708 EAL: Detected lcore 55 as core 19 on socket 1
00:05:10.708 EAL: Detected lcore 56 as core 20 on socket 1
00:05:10.708 EAL: Detected lcore 57 as core 21 on socket 1
00:05:10.708 EAL: Detected lcore 58 as core 22 on socket 1
00:05:10.708 EAL: Detected lcore 59 as core 23 on socket 1
00:05:10.708 EAL: Detected lcore 60 as core 24 on socket 1
00:05:10.708 EAL: Detected lcore 61 as core 25 on socket 1
00:05:10.708 EAL: Detected lcore 62 as core 26 on socket 1
00:05:10.708 EAL: Detected lcore 63 as core 27 on socket 1
00:05:10.708 EAL: Detected lcore 64 as core 28 on socket 1
00:05:10.708 EAL: Detected lcore 65 as core 29 on socket 1
00:05:10.708 EAL: Detected lcore 66 as core 30 on socket 1
00:05:10.708 EAL: Detected lcore 67 as core 31 on socket 1
00:05:10.708 EAL: Detected lcore 68 as core 32 on socket 1
00:05:10.708 EAL: Detected lcore 69 as core 33 on socket 1
00:05:10.708 EAL: Detected lcore 70 as core 34 on socket 1
00:05:10.708 EAL: Detected lcore 71 as core 35 on socket 1
00:05:10.708 EAL: Detected lcore 72 as core 0 on socket 0
00:05:10.708 EAL: Detected lcore 73 as core 1 on socket 0
00:05:10.708 EAL: Detected lcore 74 as core 2 on socket 0
00:05:10.708 EAL: Detected lcore 75 as core 3 on socket 0
00:05:10.708 EAL: Detected lcore 76 as core 4 on socket 0
00:05:10.708 EAL: Detected lcore 77 as core 5 on socket 0
00:05:10.708 EAL: Detected lcore 78 as core 6 on socket 0
00:05:10.708 EAL: Detected lcore 79 as core 7 on socket 0
00:05:10.708 EAL: Detected lcore 80 as core 8 on socket 0
00:05:10.708 EAL: Detected lcore 81 as core 9 on socket 0
00:05:10.708 EAL: Detected lcore 82 as core 10 on socket 0
00:05:10.708 EAL: Detected lcore 83 as core 11 on socket 0
00:05:10.708 EAL: Detected lcore 84 as core 12 on socket 0
00:05:10.708 EAL: Detected lcore 85 as core 13 on socket 0
00:05:10.708 EAL: Detected lcore 86 as core 14 on socket 0
00:05:10.708 EAL: Detected lcore 87 as core 15 on socket 0
00:05:10.708 EAL: Detected lcore 88 as core 16 on socket 0
00:05:10.708 EAL: Detected lcore 89 as core 17 on socket 0
00:05:10.708 EAL: Detected lcore 90 as core 18 on socket 0
00:05:10.708 EAL: Detected lcore 91 as core 19 on socket 0
00:05:10.708 EAL: Detected lcore 92 as core 20 on socket 0
00:05:10.708 EAL: Detected lcore 93 as core 21 on socket 0
00:05:10.708 EAL: Detected lcore 94 as core 22 on socket 0
00:05:10.708 EAL: Detected lcore 95 as core 23 on socket 0
00:05:10.708 EAL: Detected lcore 96 as core 24 on socket 0
00:05:10.708 EAL: Detected lcore 97 as core 25 on socket 0
00:05:10.708 EAL: Detected lcore 98 as core 26 on socket 0
00:05:10.708 EAL: Detected lcore 99 as core 27 on socket 0
00:05:10.708 EAL: Detected lcore 100 as core 28 on socket 0
00:05:10.708 EAL: Detected lcore 101 as core 29 on socket 0
00:05:10.708 EAL: Detected lcore 102 as core 30 on socket 0
00:05:10.708 EAL: Detected lcore 103 as core 31 on socket 0
00:05:10.708 EAL: Detected lcore 104 as core 32 on socket 0
00:05:10.708 EAL: Detected lcore 105 as core 33 on socket 0
00:05:10.708 EAL: Detected lcore 106 as core 34 on socket 0
00:05:10.708 EAL: Detected lcore 107 as core 35 on socket 0
00:05:10.708 EAL: Detected lcore 108 as core 0 on socket 1
00:05:10.708 EAL: Detected lcore 109 as core 1 on socket 1
00:05:10.708 EAL: Detected lcore 110 as core 2 on socket 1
00:05:10.708 EAL: Detected lcore 111 as core 3 on socket 1
00:05:10.708 EAL: Detected lcore 112 as core 4 on socket 1
00:05:10.708 EAL: Detected lcore 113 as core 5 on socket 1
00:05:10.708 EAL: Detected lcore 114 as core 6 on socket 1
00:05:10.708 EAL: Detected lcore 115 as core 7 on socket 1
00:05:10.708 EAL: Detected lcore 116 as core 8 on socket 1
00:05:10.708 EAL: Detected lcore 117 as core 9 on socket 1
00:05:10.708 EAL: Detected lcore 118 as core 10 on socket 1
00:05:10.708 EAL: Detected lcore 119 as core 11 on socket 1
00:05:10.708 EAL: Detected lcore 120 as core 12 on socket 1
00:05:10.708 EAL: Detected lcore 121 as core 13 on socket 1
00:05:10.708 EAL: Detected lcore 122 as core 14 on socket 1
00:05:10.709 EAL: Detected lcore 123 as core 15 on socket 1
00:05:10.709 EAL: Detected lcore 124 as core 16 on socket 1
00:05:10.709 EAL: Detected lcore 125 as core 17 on socket 1
00:05:10.709 EAL: Detected lcore 126 as core 18 on socket 1
00:05:10.709 EAL: Detected lcore 127 as core 19 on socket 1
00:05:10.709 EAL: Skipped lcore 128 as core 20 on socket 1
00:05:10.709 EAL: Skipped lcore 129 as core 21 on socket 1
00:05:10.709 EAL: Skipped lcore 130 as core 22 on socket 1
00:05:10.709 EAL: Skipped lcore 131 as core 23 on socket 1
00:05:10.709 EAL: Skipped lcore 132 as core 24 on socket 1
00:05:10.709 EAL: Skipped lcore 133 as core 25 on socket 1
00:05:10.709 EAL: Skipped lcore 134 as core 26 on socket 1
00:05:10.709 EAL: Skipped lcore 135 as core 27 on socket 1
00:05:10.709 EAL: Skipped lcore 136 as core 28 on socket 1
00:05:10.709 EAL: Skipped lcore 137 as core 29 on socket 1
00:05:10.709 EAL: Skipped lcore 138 as core 30 on socket 1
00:05:10.709 EAL: Skipped lcore 139 as core 31 on socket 1
00:05:10.709 EAL: Skipped lcore 140 as core 32 on socket 1
00:05:10.709 EAL: Skipped lcore 141 as core 33 on socket 1
00:05:10.709 EAL: Skipped lcore 142 as core 34 on socket 1
00:05:10.709 EAL: Skipped lcore 143 as core 35 on socket 1
00:05:10.709 EAL: Maximum logical cores by configuration: 128
00:05:10.709 EAL: Detected CPU lcores: 128
00:05:10.709 EAL: Detected NUMA nodes: 2
00:05:10.709 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:05:10.709 EAL: Detected shared linkage of DPDK
00:05:10.709 EAL: No shared files mode enabled, IPC will be disabled
00:05:10.709 EAL: Bus pci wants IOVA as 'DC'
00:05:10.709 EAL: Buses did not request a specific IOVA mode.
00:05:10.709 EAL: IOMMU is available, selecting IOVA as VA mode.
00:05:10.709 EAL: Selected IOVA mode 'VA'
00:05:10.709 EAL: No free 2048 kB hugepages reported on node 1
00:05:10.709 EAL: Probing VFIO support...
00:05:10.709 EAL: IOMMU type 1 (Type 1) is supported
00:05:10.709 EAL: IOMMU type 7 (sPAPR) is not supported
00:05:10.709 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:05:10.709 EAL: VFIO support initialized
00:05:10.709 EAL: Ask a virtual area of 0x2e000 bytes
00:05:10.709 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:10.709 EAL: Setting up physically contiguous memory...
00:05:10.709 EAL: Setting maximum number of open files to 524288
00:05:10.709 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:10.709 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:05:10.709 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:10.709 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:10.709 EAL: Ask a virtual area of 0x61000 bytes
00:05:10.709 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:10.709 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:10.709 EAL: Ask a virtual area of 0x400000000 bytes
00:05:10.709 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:10.709 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:10.709 EAL: Hugepages will be freed exactly as allocated.
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: TSC frequency is ~2400000 KHz
00:05:10.709 EAL: Main lcore 0 is ready (tid=7f96a95f7a00;cpuset=[0])
00:05:10.709 EAL: Trying to obtain current memory policy.
00:05:10.709 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.709 EAL: Restoring previous memory policy: 0
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was expanded by 2MB
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:10.709 EAL: Mem event callback 'spdk:(nil)' registered
00:05:10.709
00:05:10.709
00:05:10.709 CUnit - A unit testing framework for C - Version 2.1-3
00:05:10.709 http://cunit.sourceforge.net/
00:05:10.709
00:05:10.709
00:05:10.709 Suite: components_suite
00:05:10.709 Test: vtophys_malloc_test ...passed
00:05:10.709 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:10.709 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.709 EAL: Restoring previous memory policy: 4
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was expanded by 4MB
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was shrunk by 4MB
00:05:10.709 EAL: Trying to obtain current memory policy.
00:05:10.709 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.709 EAL: Restoring previous memory policy: 4
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was expanded by 6MB
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was shrunk by 6MB
00:05:10.709 EAL: Trying to obtain current memory policy.
00:05:10.709 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.709 EAL: Restoring previous memory policy: 4
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was expanded by 10MB
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was shrunk by 10MB
00:05:10.709 EAL: Trying to obtain current memory policy.
00:05:10.709 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:10.709 EAL: Restoring previous memory policy: 4
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.709 EAL: No shared files mode enabled, IPC is disabled
00:05:10.709 EAL: Heap on socket 0 was expanded by 18MB
00:05:10.709 EAL: Calling mem event callback 'spdk:(nil)'
00:05:10.709 EAL: request: mp_malloc_sync
00:05:10.710 EAL: No shared files mode enabled, IPC is disabled
00:05:10.710 EAL: Heap on socket 0 was shrunk by 18MB
00:05:10.710 EAL: Trying to obtain current memory policy.
00:05:10.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.710 EAL: Restoring previous memory policy: 4 00:05:10.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.710 EAL: request: mp_malloc_sync 00:05:10.710 EAL: No shared files mode enabled, IPC is disabled 00:05:10.710 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.710 EAL: request: mp_malloc_sync 00:05:10.710 EAL: No shared files mode enabled, IPC is disabled 00:05:10.710 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.710 EAL: Trying to obtain current memory policy. 00:05:10.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.710 EAL: Restoring previous memory policy: 4 00:05:10.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.710 EAL: request: mp_malloc_sync 00:05:10.710 EAL: No shared files mode enabled, IPC is disabled 00:05:10.710 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.710 EAL: request: mp_malloc_sync 00:05:10.710 EAL: No shared files mode enabled, IPC is disabled 00:05:10.710 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.710 EAL: Trying to obtain current memory policy. 00:05:10.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.710 EAL: Restoring previous memory policy: 4 00:05:10.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.710 EAL: request: mp_malloc_sync 00:05:10.710 EAL: No shared files mode enabled, IPC is disabled 00:05:10.710 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.710 EAL: request: mp_malloc_sync 00:05:10.710 EAL: No shared files mode enabled, IPC is disabled 00:05:10.710 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.710 EAL: Trying to obtain current memory policy. 
00:05:10.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.710 EAL: Restoring previous memory policy: 4 00:05:10.710 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.710 EAL: request: mp_malloc_sync 00:05:10.710 EAL: No shared files mode enabled, IPC is disabled 00:05:10.710 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.971 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.971 EAL: request: mp_malloc_sync 00:05:10.971 EAL: No shared files mode enabled, IPC is disabled 00:05:10.971 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.971 EAL: Trying to obtain current memory policy. 00:05:10.971 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.971 EAL: Restoring previous memory policy: 4 00:05:10.971 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.971 EAL: request: mp_malloc_sync 00:05:10.971 EAL: No shared files mode enabled, IPC is disabled 00:05:10.971 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.971 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.971 EAL: request: mp_malloc_sync 00:05:10.971 EAL: No shared files mode enabled, IPC is disabled 00:05:10.971 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.971 EAL: Trying to obtain current memory policy. 
00:05:10.971 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.231 EAL: Restoring previous memory policy: 4 00:05:11.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.231 EAL: request: mp_malloc_sync 00:05:11.231 EAL: No shared files mode enabled, IPC is disabled 00:05:11.231 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.492 EAL: request: mp_malloc_sync 00:05:11.492 EAL: No shared files mode enabled, IPC is disabled 00:05:11.492 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.492 passed 00:05:11.492 00:05:11.492 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.492 suites 1 1 n/a 0 0 00:05:11.492 tests 2 2 2 0 0 00:05:11.492 asserts 497 497 497 0 n/a 00:05:11.492 00:05:11.492 Elapsed time = 0.655 seconds 00:05:11.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.492 EAL: request: mp_malloc_sync 00:05:11.492 EAL: No shared files mode enabled, IPC is disabled 00:05:11.492 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.492 EAL: No shared files mode enabled, IPC is disabled 00:05:11.492 EAL: No shared files mode enabled, IPC is disabled 00:05:11.492 EAL: No shared files mode enabled, IPC is disabled 00:05:11.492 00:05:11.492 real 0m0.776s 00:05:11.492 user 0m0.405s 00:05:11.492 sys 0m0.345s 00:05:11.492 19:44:59 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.492 19:44:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:11.492 ************************************ 00:05:11.492 END TEST env_vtophys 00:05:11.492 ************************************ 00:05:11.492 19:44:59 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.492 19:44:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.492 19:44:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.492 19:44:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.492 
************************************ 00:05:11.492 START TEST env_pci 00:05:11.492 ************************************ 00:05:11.493 19:44:59 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.493 00:05:11.493 00:05:11.493 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.493 http://cunit.sourceforge.net/ 00:05:11.493 00:05:11.493 00:05:11.493 Suite: pci 00:05:11.493 Test: pci_hook ...[2024-07-24 19:44:59.300690] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3456153 has claimed it 00:05:11.493 EAL: Cannot find device (10000:00:01.0) 00:05:11.493 EAL: Failed to attach device on primary process 00:05:11.493 passed 00:05:11.493 00:05:11.493 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.493 suites 1 1 n/a 0 0 00:05:11.493 tests 1 1 1 0 0 00:05:11.493 asserts 25 25 25 0 n/a 00:05:11.493 00:05:11.493 Elapsed time = 0.030 seconds 00:05:11.493 00:05:11.493 real 0m0.050s 00:05:11.493 user 0m0.016s 00:05:11.493 sys 0m0.034s 00:05:11.493 19:44:59 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.493 19:44:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:11.493 ************************************ 00:05:11.493 END TEST env_pci 00:05:11.493 ************************************ 00:05:11.493 19:44:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.493 19:44:59 env -- env/env.sh@15 -- # uname 00:05:11.493 19:44:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.493 19:44:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:11.493 19:44:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.493 19:44:59 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:11.493 19:44:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.493 19:44:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.493 ************************************ 00:05:11.493 START TEST env_dpdk_post_init 00:05:11.493 ************************************ 00:05:11.493 19:44:59 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.493 EAL: Detected CPU lcores: 128 00:05:11.493 EAL: Detected NUMA nodes: 2 00:05:11.493 EAL: Detected shared linkage of DPDK 00:05:11.493 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.753 EAL: Selected IOVA mode 'VA' 00:05:11.753 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.753 EAL: VFIO support initialized 00:05:11.753 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.753 EAL: Using IOMMU type 1 (Type 1) 00:05:11.753 EAL: Ignore mapping IO port bar(1) 00:05:12.014 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:12.014 EAL: Ignore mapping IO port bar(1) 00:05:12.275 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:12.275 EAL: Ignore mapping IO port bar(1) 00:05:12.275 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:12.536 EAL: Ignore mapping IO port bar(1) 00:05:12.536 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:12.797 EAL: Ignore mapping IO port bar(1) 00:05:12.797 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:13.058 EAL: Ignore mapping IO port bar(1) 00:05:13.058 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:13.058 EAL: Ignore mapping IO port bar(1) 00:05:13.319 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 
00:05:13.319 EAL: Ignore mapping IO port bar(1) 00:05:13.579 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:13.840 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:13.840 EAL: Ignore mapping IO port bar(1) 00:05:13.840 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:14.101 EAL: Ignore mapping IO port bar(1) 00:05:14.101 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:14.363 EAL: Ignore mapping IO port bar(1) 00:05:14.363 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:14.624 EAL: Ignore mapping IO port bar(1) 00:05:14.624 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:14.624 EAL: Ignore mapping IO port bar(1) 00:05:14.885 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:14.885 EAL: Ignore mapping IO port bar(1) 00:05:15.146 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:15.146 EAL: Ignore mapping IO port bar(1) 00:05:15.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:15.407 EAL: Ignore mapping IO port bar(1) 00:05:15.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:15.407 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:15.407 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:15.667 Starting DPDK initialization... 00:05:15.667 Starting SPDK post initialization... 00:05:15.667 SPDK NVMe probe 00:05:15.667 Attaching to 0000:65:00.0 00:05:15.667 Attached to 0000:65:00.0 00:05:15.667 Cleaning up... 
00:05:17.581 00:05:17.581 real 0m5.726s 00:05:17.581 user 0m0.178s 00:05:17.581 sys 0m0.091s 00:05:17.581 19:45:05 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.581 19:45:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.581 ************************************ 00:05:17.581 END TEST env_dpdk_post_init 00:05:17.581 ************************************ 00:05:17.581 19:45:05 env -- env/env.sh@26 -- # uname 00:05:17.581 19:45:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:17.581 19:45:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.581 19:45:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.581 19:45:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.581 19:45:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.581 ************************************ 00:05:17.581 START TEST env_mem_callbacks 00:05:17.581 ************************************ 00:05:17.581 19:45:05 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:17.581 EAL: Detected CPU lcores: 128 00:05:17.581 EAL: Detected NUMA nodes: 2 00:05:17.581 EAL: Detected shared linkage of DPDK 00:05:17.581 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:17.581 EAL: Selected IOVA mode 'VA' 00:05:17.581 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.581 EAL: VFIO support initialized 00:05:17.581 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:17.581 00:05:17.581 00:05:17.581 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.581 http://cunit.sourceforge.net/ 00:05:17.581 00:05:17.581 00:05:17.581 Suite: memory 00:05:17.581 Test: test ... 
00:05:17.581 register 0x200000200000 2097152 00:05:17.581 malloc 3145728 00:05:17.581 register 0x200000400000 4194304 00:05:17.581 buf 0x200000500000 len 3145728 PASSED 00:05:17.581 malloc 64 00:05:17.581 buf 0x2000004fff40 len 64 PASSED 00:05:17.581 malloc 4194304 00:05:17.581 register 0x200000800000 6291456 00:05:17.581 buf 0x200000a00000 len 4194304 PASSED 00:05:17.581 free 0x200000500000 3145728 00:05:17.581 free 0x2000004fff40 64 00:05:17.581 unregister 0x200000400000 4194304 PASSED 00:05:17.582 free 0x200000a00000 4194304 00:05:17.582 unregister 0x200000800000 6291456 PASSED 00:05:17.582 malloc 8388608 00:05:17.582 register 0x200000400000 10485760 00:05:17.582 buf 0x200000600000 len 8388608 PASSED 00:05:17.582 free 0x200000600000 8388608 00:05:17.582 unregister 0x200000400000 10485760 PASSED 00:05:17.582 passed 00:05:17.582 00:05:17.582 Run Summary: Type Total Ran Passed Failed Inactive 00:05:17.582 suites 1 1 n/a 0 0 00:05:17.582 tests 1 1 1 0 0 00:05:17.582 asserts 15 15 15 0 n/a 00:05:17.582 00:05:17.582 Elapsed time = 0.004 seconds 00:05:17.582 00:05:17.582 real 0m0.032s 00:05:17.582 user 0m0.010s 00:05:17.582 sys 0m0.022s 00:05:17.582 19:45:05 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.582 19:45:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:17.582 ************************************ 00:05:17.582 END TEST env_mem_callbacks 00:05:17.582 ************************************ 00:05:17.582 00:05:17.582 real 0m7.243s 00:05:17.582 user 0m0.967s 00:05:17.582 sys 0m0.820s 00:05:17.582 19:45:05 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.582 19:45:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:17.582 ************************************ 00:05:17.582 END TEST env 00:05:17.582 ************************************ 00:05:17.582 19:45:05 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.582 19:45:05 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.582 19:45:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.582 19:45:05 -- common/autotest_common.sh@10 -- # set +x 00:05:17.582 ************************************ 00:05:17.582 START TEST rpc 00:05:17.582 ************************************ 00:05:17.582 19:45:05 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:17.582 * Looking for test storage... 00:05:17.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.582 19:45:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3457705 00:05:17.582 19:45:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.582 19:45:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:17.582 19:45:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3457705 00:05:17.582 19:45:05 rpc -- common/autotest_common.sh@831 -- # '[' -z 3457705 ']' 00:05:17.582 19:45:05 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.582 19:45:05 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.582 19:45:05 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.582 19:45:05 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.582 19:45:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.582 [2024-07-24 19:45:05.507565] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:05:17.582 [2024-07-24 19:45:05.507614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457705 ] 00:05:17.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.850 [2024-07-24 19:45:05.567428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.850 [2024-07-24 19:45:05.633531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:17.850 [2024-07-24 19:45:05.633566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3457705' to capture a snapshot of events at runtime. 00:05:17.850 [2024-07-24 19:45:05.633573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:17.850 [2024-07-24 19:45:05.633580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:17.850 [2024-07-24 19:45:05.633585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3457705 for offline analysis/debug. 
00:05:17.850 [2024-07-24 19:45:05.633605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.476 19:45:06 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.476 19:45:06 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:18.476 19:45:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.476 19:45:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.476 19:45:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:18.476 19:45:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:18.476 19:45:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.476 19:45:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.476 19:45:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.476 ************************************ 00:05:18.476 START TEST rpc_integrity 00:05:18.476 ************************************ 00:05:18.476 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:18.476 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:18.476 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.476 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.476 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.476 19:45:06 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:18.476 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:18.476 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:18.476 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:18.476 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.476 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.476 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.476 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:18.477 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:18.477 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.477 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.477 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.477 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:18.477 { 00:05:18.477 "name": "Malloc0", 00:05:18.477 "aliases": [ 00:05:18.477 "e1328a15-374c-4f14-a39b-04b8da5214ab" 00:05:18.477 ], 00:05:18.477 "product_name": "Malloc disk", 00:05:18.477 "block_size": 512, 00:05:18.477 "num_blocks": 16384, 00:05:18.477 "uuid": "e1328a15-374c-4f14-a39b-04b8da5214ab", 00:05:18.477 "assigned_rate_limits": { 00:05:18.477 "rw_ios_per_sec": 0, 00:05:18.477 "rw_mbytes_per_sec": 0, 00:05:18.477 "r_mbytes_per_sec": 0, 00:05:18.477 "w_mbytes_per_sec": 0 00:05:18.477 }, 00:05:18.477 "claimed": false, 00:05:18.477 "zoned": false, 00:05:18.477 "supported_io_types": { 00:05:18.477 "read": true, 00:05:18.477 "write": true, 00:05:18.477 "unmap": true, 00:05:18.477 "flush": true, 00:05:18.477 "reset": true, 00:05:18.477 "nvme_admin": false, 00:05:18.477 "nvme_io": false, 00:05:18.477 "nvme_io_md": false, 00:05:18.477 "write_zeroes": true, 00:05:18.477 "zcopy": true, 00:05:18.477 "get_zone_info": false, 00:05:18.477 
"zone_management": false, 00:05:18.477 "zone_append": false, 00:05:18.477 "compare": false, 00:05:18.477 "compare_and_write": false, 00:05:18.477 "abort": true, 00:05:18.477 "seek_hole": false, 00:05:18.477 "seek_data": false, 00:05:18.477 "copy": true, 00:05:18.477 "nvme_iov_md": false 00:05:18.477 }, 00:05:18.477 "memory_domains": [ 00:05:18.477 { 00:05:18.477 "dma_device_id": "system", 00:05:18.477 "dma_device_type": 1 00:05:18.477 }, 00:05:18.477 { 00:05:18.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.477 "dma_device_type": 2 00:05:18.477 } 00:05:18.477 ], 00:05:18.477 "driver_specific": {} 00:05:18.477 } 00:05:18.477 ]' 00:05:18.477 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:18.477 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:18.477 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:18.477 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.477 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.738 [2024-07-24 19:45:06.433258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:18.738 [2024-07-24 19:45:06.433289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:18.738 [2024-07-24 19:45:06.433301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2050d80 00:05:18.738 [2024-07-24 19:45:06.433309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:18.738 [2024-07-24 19:45:06.434648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:18.738 [2024-07-24 19:45:06.434670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:18.738 Passthru0 00:05:18.738 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.738 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:18.738 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.738 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.738 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.738 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:18.738 { 00:05:18.738 "name": "Malloc0", 00:05:18.738 "aliases": [ 00:05:18.738 "e1328a15-374c-4f14-a39b-04b8da5214ab" 00:05:18.738 ], 00:05:18.738 "product_name": "Malloc disk", 00:05:18.738 "block_size": 512, 00:05:18.738 "num_blocks": 16384, 00:05:18.738 "uuid": "e1328a15-374c-4f14-a39b-04b8da5214ab", 00:05:18.738 "assigned_rate_limits": { 00:05:18.738 "rw_ios_per_sec": 0, 00:05:18.738 "rw_mbytes_per_sec": 0, 00:05:18.738 "r_mbytes_per_sec": 0, 00:05:18.738 "w_mbytes_per_sec": 0 00:05:18.738 }, 00:05:18.738 "claimed": true, 00:05:18.738 "claim_type": "exclusive_write", 00:05:18.738 "zoned": false, 00:05:18.738 "supported_io_types": { 00:05:18.738 "read": true, 00:05:18.738 "write": true, 00:05:18.738 "unmap": true, 00:05:18.738 "flush": true, 00:05:18.738 "reset": true, 00:05:18.738 "nvme_admin": false, 00:05:18.738 "nvme_io": false, 00:05:18.738 "nvme_io_md": false, 00:05:18.738 "write_zeroes": true, 00:05:18.738 "zcopy": true, 00:05:18.738 "get_zone_info": false, 00:05:18.738 "zone_management": false, 00:05:18.738 "zone_append": false, 00:05:18.738 "compare": false, 00:05:18.738 "compare_and_write": false, 00:05:18.738 "abort": true, 00:05:18.738 "seek_hole": false, 00:05:18.738 "seek_data": false, 00:05:18.738 "copy": true, 00:05:18.738 "nvme_iov_md": false 00:05:18.738 }, 00:05:18.738 "memory_domains": [ 00:05:18.738 { 00:05:18.738 "dma_device_id": "system", 00:05:18.738 "dma_device_type": 1 00:05:18.738 }, 00:05:18.738 { 00:05:18.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.738 "dma_device_type": 2 00:05:18.738 } 00:05:18.738 ], 00:05:18.738 "driver_specific": {} 00:05:18.738 }, 00:05:18.738 { 
00:05:18.738 "name": "Passthru0", 00:05:18.738 "aliases": [ 00:05:18.738 "c8af2ec6-6478-59af-85a3-3a82c5b3e84b" 00:05:18.738 ], 00:05:18.738 "product_name": "passthru", 00:05:18.738 "block_size": 512, 00:05:18.738 "num_blocks": 16384, 00:05:18.738 "uuid": "c8af2ec6-6478-59af-85a3-3a82c5b3e84b", 00:05:18.738 "assigned_rate_limits": { 00:05:18.738 "rw_ios_per_sec": 0, 00:05:18.738 "rw_mbytes_per_sec": 0, 00:05:18.738 "r_mbytes_per_sec": 0, 00:05:18.738 "w_mbytes_per_sec": 0 00:05:18.738 }, 00:05:18.738 "claimed": false, 00:05:18.738 "zoned": false, 00:05:18.738 "supported_io_types": { 00:05:18.738 "read": true, 00:05:18.738 "write": true, 00:05:18.738 "unmap": true, 00:05:18.738 "flush": true, 00:05:18.738 "reset": true, 00:05:18.738 "nvme_admin": false, 00:05:18.738 "nvme_io": false, 00:05:18.738 "nvme_io_md": false, 00:05:18.738 "write_zeroes": true, 00:05:18.738 "zcopy": true, 00:05:18.738 "get_zone_info": false, 00:05:18.738 "zone_management": false, 00:05:18.738 "zone_append": false, 00:05:18.738 "compare": false, 00:05:18.738 "compare_and_write": false, 00:05:18.738 "abort": true, 00:05:18.738 "seek_hole": false, 00:05:18.738 "seek_data": false, 00:05:18.738 "copy": true, 00:05:18.738 "nvme_iov_md": false 00:05:18.738 }, 00:05:18.738 "memory_domains": [ 00:05:18.738 { 00:05:18.738 "dma_device_id": "system", 00:05:18.738 "dma_device_type": 1 00:05:18.738 }, 00:05:18.738 { 00:05:18.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:18.738 "dma_device_type": 2 00:05:18.738 } 00:05:18.738 ], 00:05:18.738 "driver_specific": { 00:05:18.738 "passthru": { 00:05:18.738 "name": "Passthru0", 00:05:18.738 "base_bdev_name": "Malloc0" 00:05:18.738 } 00:05:18.738 } 00:05:18.738 } 00:05:18.738 ]' 00:05:18.738 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:18.738 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:18.738 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:18.738 19:45:06 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.738 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.739 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.739 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.739 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:18.739 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:18.739 19:45:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:18.739 00:05:18.739 real 0m0.289s 00:05:18.739 user 0m0.183s 00:05:18.739 sys 0m0.041s 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.739 19:45:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:18.739 ************************************ 00:05:18.739 END TEST rpc_integrity 00:05:18.739 ************************************ 00:05:18.739 19:45:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:18.739 19:45:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.739 19:45:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.739 19:45:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.739 ************************************ 00:05:18.739 START TEST rpc_plugins 
00:05:18.739 ************************************ 00:05:18.739 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:18.739 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:18.739 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.739 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:18.739 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.739 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:18.739 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:18.739 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.739 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:19.000 { 00:05:19.000 "name": "Malloc1", 00:05:19.000 "aliases": [ 00:05:19.000 "ce662345-7727-41a5-b1bb-e99987dcce51" 00:05:19.000 ], 00:05:19.000 "product_name": "Malloc disk", 00:05:19.000 "block_size": 4096, 00:05:19.000 "num_blocks": 256, 00:05:19.000 "uuid": "ce662345-7727-41a5-b1bb-e99987dcce51", 00:05:19.000 "assigned_rate_limits": { 00:05:19.000 "rw_ios_per_sec": 0, 00:05:19.000 "rw_mbytes_per_sec": 0, 00:05:19.000 "r_mbytes_per_sec": 0, 00:05:19.000 "w_mbytes_per_sec": 0 00:05:19.000 }, 00:05:19.000 "claimed": false, 00:05:19.000 "zoned": false, 00:05:19.000 "supported_io_types": { 00:05:19.000 "read": true, 00:05:19.000 "write": true, 00:05:19.000 "unmap": true, 00:05:19.000 "flush": true, 00:05:19.000 "reset": true, 00:05:19.000 "nvme_admin": false, 00:05:19.000 "nvme_io": false, 00:05:19.000 "nvme_io_md": false, 00:05:19.000 "write_zeroes": true, 00:05:19.000 "zcopy": true, 00:05:19.000 "get_zone_info": false, 00:05:19.000 "zone_management": false, 00:05:19.000 
"zone_append": false, 00:05:19.000 "compare": false, 00:05:19.000 "compare_and_write": false, 00:05:19.000 "abort": true, 00:05:19.000 "seek_hole": false, 00:05:19.000 "seek_data": false, 00:05:19.000 "copy": true, 00:05:19.000 "nvme_iov_md": false 00:05:19.000 }, 00:05:19.000 "memory_domains": [ 00:05:19.000 { 00:05:19.000 "dma_device_id": "system", 00:05:19.000 "dma_device_type": 1 00:05:19.000 }, 00:05:19.000 { 00:05:19.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.000 "dma_device_type": 2 00:05:19.000 } 00:05:19.000 ], 00:05:19.000 "driver_specific": {} 00:05:19.000 } 00:05:19.000 ]' 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:19.000 19:45:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:19.000 00:05:19.000 real 0m0.151s 00:05:19.000 user 0m0.095s 00:05:19.000 sys 0m0.019s 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.000 19:45:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:19.000 ************************************ 
00:05:19.000 END TEST rpc_plugins 00:05:19.000 ************************************ 00:05:19.000 19:45:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:19.000 19:45:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.000 19:45:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.000 19:45:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.000 ************************************ 00:05:19.000 START TEST rpc_trace_cmd_test 00:05:19.000 ************************************ 00:05:19.000 19:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:19.000 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:19.000 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:19.000 19:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.000 19:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.000 19:45:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.001 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:19.001 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3457705", 00:05:19.001 "tpoint_group_mask": "0x8", 00:05:19.001 "iscsi_conn": { 00:05:19.001 "mask": "0x2", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "scsi": { 00:05:19.001 "mask": "0x4", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "bdev": { 00:05:19.001 "mask": "0x8", 00:05:19.001 "tpoint_mask": "0xffffffffffffffff" 00:05:19.001 }, 00:05:19.001 "nvmf_rdma": { 00:05:19.001 "mask": "0x10", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "nvmf_tcp": { 00:05:19.001 "mask": "0x20", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "ftl": { 00:05:19.001 "mask": "0x40", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "blobfs": { 00:05:19.001 "mask": "0x80", 00:05:19.001 
"tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "dsa": { 00:05:19.001 "mask": "0x200", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "thread": { 00:05:19.001 "mask": "0x400", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "nvme_pcie": { 00:05:19.001 "mask": "0x800", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "iaa": { 00:05:19.001 "mask": "0x1000", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "nvme_tcp": { 00:05:19.001 "mask": "0x2000", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "bdev_nvme": { 00:05:19.001 "mask": "0x4000", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 }, 00:05:19.001 "sock": { 00:05:19.001 "mask": "0x8000", 00:05:19.001 "tpoint_mask": "0x0" 00:05:19.001 } 00:05:19.001 }' 00:05:19.001 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:19.001 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:19.001 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:19.262 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:19.262 19:45:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:19.262 19:45:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:19.262 19:45:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:19.262 19:45:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:19.262 19:45:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:19.262 19:45:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:19.262 00:05:19.262 real 0m0.248s 00:05:19.262 user 0m0.210s 00:05:19.262 sys 0m0.031s 00:05:19.262 19:45:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.262 19:45:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:19.262 
************************************ 00:05:19.262 END TEST rpc_trace_cmd_test 00:05:19.262 ************************************ 00:05:19.262 19:45:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:19.262 19:45:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:19.262 19:45:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:19.262 19:45:07 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.262 19:45:07 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.262 19:45:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.262 ************************************ 00:05:19.262 START TEST rpc_daemon_integrity 00:05:19.262 ************************************ 00:05:19.262 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:19.262 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:19.262 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.262 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.523 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:19.523 { 00:05:19.523 "name": "Malloc2", 00:05:19.523 "aliases": [ 00:05:19.523 "4d4096a9-9c78-4234-b65f-98a38c711480" 00:05:19.523 ], 00:05:19.523 "product_name": "Malloc disk", 00:05:19.523 "block_size": 512, 00:05:19.523 "num_blocks": 16384, 00:05:19.523 "uuid": "4d4096a9-9c78-4234-b65f-98a38c711480", 00:05:19.523 "assigned_rate_limits": { 00:05:19.523 "rw_ios_per_sec": 0, 00:05:19.523 "rw_mbytes_per_sec": 0, 00:05:19.523 "r_mbytes_per_sec": 0, 00:05:19.523 "w_mbytes_per_sec": 0 00:05:19.523 }, 00:05:19.523 "claimed": false, 00:05:19.523 "zoned": false, 00:05:19.523 "supported_io_types": { 00:05:19.523 "read": true, 00:05:19.523 "write": true, 00:05:19.523 "unmap": true, 00:05:19.523 "flush": true, 00:05:19.523 "reset": true, 00:05:19.523 "nvme_admin": false, 00:05:19.523 "nvme_io": false, 00:05:19.523 "nvme_io_md": false, 00:05:19.523 "write_zeroes": true, 00:05:19.523 "zcopy": true, 00:05:19.524 "get_zone_info": false, 00:05:19.524 "zone_management": false, 00:05:19.524 "zone_append": false, 00:05:19.524 "compare": false, 00:05:19.524 "compare_and_write": false, 00:05:19.524 "abort": true, 00:05:19.524 "seek_hole": false, 00:05:19.524 "seek_data": false, 00:05:19.524 "copy": true, 00:05:19.524 "nvme_iov_md": false 00:05:19.524 }, 00:05:19.524 "memory_domains": [ 00:05:19.524 { 00:05:19.524 "dma_device_id": "system", 00:05:19.524 "dma_device_type": 1 00:05:19.524 }, 00:05:19.524 { 00:05:19.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.524 "dma_device_type": 2 00:05:19.524 } 00:05:19.524 ], 00:05:19.524 "driver_specific": {} 00:05:19.524 } 00:05:19.524 ]' 00:05:19.524 
19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 [2024-07-24 19:45:07.343736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:19.524 [2024-07-24 19:45:07.343764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.524 [2024-07-24 19:45:07.343778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2051a90 00:05:19.524 [2024-07-24 19:45:07.343784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.524 [2024-07-24 19:45:07.344990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.524 [2024-07-24 19:45:07.345010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:19.524 Passthru0 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:19.524 { 00:05:19.524 "name": "Malloc2", 00:05:19.524 "aliases": [ 00:05:19.524 "4d4096a9-9c78-4234-b65f-98a38c711480" 00:05:19.524 ], 00:05:19.524 "product_name": "Malloc disk", 00:05:19.524 "block_size": 512, 00:05:19.524 
"num_blocks": 16384, 00:05:19.524 "uuid": "4d4096a9-9c78-4234-b65f-98a38c711480", 00:05:19.524 "assigned_rate_limits": { 00:05:19.524 "rw_ios_per_sec": 0, 00:05:19.524 "rw_mbytes_per_sec": 0, 00:05:19.524 "r_mbytes_per_sec": 0, 00:05:19.524 "w_mbytes_per_sec": 0 00:05:19.524 }, 00:05:19.524 "claimed": true, 00:05:19.524 "claim_type": "exclusive_write", 00:05:19.524 "zoned": false, 00:05:19.524 "supported_io_types": { 00:05:19.524 "read": true, 00:05:19.524 "write": true, 00:05:19.524 "unmap": true, 00:05:19.524 "flush": true, 00:05:19.524 "reset": true, 00:05:19.524 "nvme_admin": false, 00:05:19.524 "nvme_io": false, 00:05:19.524 "nvme_io_md": false, 00:05:19.524 "write_zeroes": true, 00:05:19.524 "zcopy": true, 00:05:19.524 "get_zone_info": false, 00:05:19.524 "zone_management": false, 00:05:19.524 "zone_append": false, 00:05:19.524 "compare": false, 00:05:19.524 "compare_and_write": false, 00:05:19.524 "abort": true, 00:05:19.524 "seek_hole": false, 00:05:19.524 "seek_data": false, 00:05:19.524 "copy": true, 00:05:19.524 "nvme_iov_md": false 00:05:19.524 }, 00:05:19.524 "memory_domains": [ 00:05:19.524 { 00:05:19.524 "dma_device_id": "system", 00:05:19.524 "dma_device_type": 1 00:05:19.524 }, 00:05:19.524 { 00:05:19.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.524 "dma_device_type": 2 00:05:19.524 } 00:05:19.524 ], 00:05:19.524 "driver_specific": {} 00:05:19.524 }, 00:05:19.524 { 00:05:19.524 "name": "Passthru0", 00:05:19.524 "aliases": [ 00:05:19.524 "dba4d2fc-27c8-5f05-ae17-ccf002c8b227" 00:05:19.524 ], 00:05:19.524 "product_name": "passthru", 00:05:19.524 "block_size": 512, 00:05:19.524 "num_blocks": 16384, 00:05:19.524 "uuid": "dba4d2fc-27c8-5f05-ae17-ccf002c8b227", 00:05:19.524 "assigned_rate_limits": { 00:05:19.524 "rw_ios_per_sec": 0, 00:05:19.524 "rw_mbytes_per_sec": 0, 00:05:19.524 "r_mbytes_per_sec": 0, 00:05:19.524 "w_mbytes_per_sec": 0 00:05:19.524 }, 00:05:19.524 "claimed": false, 00:05:19.524 "zoned": false, 00:05:19.524 
"supported_io_types": { 00:05:19.524 "read": true, 00:05:19.524 "write": true, 00:05:19.524 "unmap": true, 00:05:19.524 "flush": true, 00:05:19.524 "reset": true, 00:05:19.524 "nvme_admin": false, 00:05:19.524 "nvme_io": false, 00:05:19.524 "nvme_io_md": false, 00:05:19.524 "write_zeroes": true, 00:05:19.524 "zcopy": true, 00:05:19.524 "get_zone_info": false, 00:05:19.524 "zone_management": false, 00:05:19.524 "zone_append": false, 00:05:19.524 "compare": false, 00:05:19.524 "compare_and_write": false, 00:05:19.524 "abort": true, 00:05:19.524 "seek_hole": false, 00:05:19.524 "seek_data": false, 00:05:19.524 "copy": true, 00:05:19.524 "nvme_iov_md": false 00:05:19.524 }, 00:05:19.524 "memory_domains": [ 00:05:19.524 { 00:05:19.524 "dma_device_id": "system", 00:05:19.524 "dma_device_type": 1 00:05:19.524 }, 00:05:19.524 { 00:05:19.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:19.524 "dma_device_type": 2 00:05:19.524 } 00:05:19.524 ], 00:05:19.524 "driver_specific": { 00:05:19.524 "passthru": { 00:05:19.524 "name": "Passthru0", 00:05:19.524 "base_bdev_name": "Malloc2" 00:05:19.524 } 00:05:19.524 } 00:05:19.524 } 00:05:19.524 ]' 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:19.524 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:19.785 19:45:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:19.785 00:05:19.785 real 0m0.293s 00:05:19.785 user 0m0.193s 00:05:19.785 sys 0m0.039s 00:05:19.785 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.785 19:45:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:19.785 ************************************ 00:05:19.785 END TEST rpc_daemon_integrity 00:05:19.785 ************************************ 00:05:19.785 19:45:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:19.785 19:45:07 rpc -- rpc/rpc.sh@84 -- # killprocess 3457705 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@950 -- # '[' -z 3457705 ']' 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@954 -- # kill -0 3457705 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@955 -- # uname 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457705 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457705' 
00:05:19.785 killing process with pid 3457705 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@969 -- # kill 3457705 00:05:19.785 19:45:07 rpc -- common/autotest_common.sh@974 -- # wait 3457705 00:05:20.046 00:05:20.046 real 0m2.445s 00:05:20.047 user 0m3.244s 00:05:20.047 sys 0m0.661s 00:05:20.047 19:45:07 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.047 19:45:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.047 ************************************ 00:05:20.047 END TEST rpc 00:05:20.047 ************************************ 00:05:20.047 19:45:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:20.047 19:45:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.047 19:45:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.047 19:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:20.047 ************************************ 00:05:20.047 START TEST skip_rpc 00:05:20.047 ************************************ 00:05:20.047 19:45:07 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:20.047 * Looking for test storage... 
00:05:20.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:20.047 19:45:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.047 19:45:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:20.047 19:45:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:20.047 19:45:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.047 19:45:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.047 19:45:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.308 ************************************ 00:05:20.308 START TEST skip_rpc 00:05:20.308 ************************************ 00:05:20.308 19:45:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:20.308 19:45:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3458233 00:05:20.308 19:45:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.308 19:45:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:20.308 19:45:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:20.308 [2024-07-24 19:45:08.069136] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:05:20.308 [2024-07-24 19:45:08.069198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458233 ] 00:05:20.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.308 [2024-07-24 19:45:08.132437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.308 [2024-07-24 19:45:08.197287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3458233 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3458233 ']' 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3458233 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3458233 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3458233' 00:05:25.612 killing process with pid 3458233 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3458233 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3458233 00:05:25.612 00:05:25.612 real 0m5.281s 00:05:25.612 user 0m5.086s 00:05:25.612 sys 0m0.228s 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.612 19:45:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.612 ************************************ 00:05:25.612 END TEST skip_rpc 00:05:25.612 ************************************ 00:05:25.612 19:45:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.612 19:45:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.612 19:45:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.612 19:45:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.612 
************************************ 00:05:25.612 START TEST skip_rpc_with_json 00:05:25.612 ************************************ 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3459897 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3459897 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3459897 ']' 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.612 19:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.612 [2024-07-24 19:45:13.423622] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:05:25.612 [2024-07-24 19:45:13.423673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459897 ] 00:05:25.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.612 [2024-07-24 19:45:13.483825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.612 [2024-07-24 19:45:13.551976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.561 [2024-07-24 19:45:14.182868] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:26.561 request: 00:05:26.561 { 00:05:26.561 "trtype": "tcp", 00:05:26.561 "method": "nvmf_get_transports", 00:05:26.561 "req_id": 1 00:05:26.561 } 00:05:26.561 Got JSON-RPC error response 00:05:26.561 response: 00:05:26.561 { 00:05:26.561 "code": -19, 00:05:26.561 "message": "No such device" 00:05:26.561 } 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.561 [2024-07-24 19:45:14.194995] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.561 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.561 { 00:05:26.561 "subsystems": [ 00:05:26.561 { 00:05:26.561 "subsystem": "vfio_user_target", 00:05:26.561 "config": null 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "subsystem": "keyring", 00:05:26.561 "config": [] 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "subsystem": "iobuf", 00:05:26.561 "config": [ 00:05:26.561 { 00:05:26.561 "method": "iobuf_set_options", 00:05:26.561 "params": { 00:05:26.561 "small_pool_count": 8192, 00:05:26.561 "large_pool_count": 1024, 00:05:26.561 "small_bufsize": 8192, 00:05:26.561 "large_bufsize": 135168 00:05:26.561 } 00:05:26.561 } 00:05:26.561 ] 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "subsystem": "sock", 00:05:26.561 "config": [ 00:05:26.561 { 00:05:26.561 "method": "sock_set_default_impl", 00:05:26.561 "params": { 00:05:26.561 "impl_name": "posix" 00:05:26.561 } 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "method": "sock_impl_set_options", 00:05:26.561 "params": { 00:05:26.561 "impl_name": "ssl", 00:05:26.561 "recv_buf_size": 4096, 00:05:26.561 "send_buf_size": 4096, 00:05:26.561 "enable_recv_pipe": true, 00:05:26.561 "enable_quickack": false, 00:05:26.561 "enable_placement_id": 0, 00:05:26.561 "enable_zerocopy_send_server": true, 00:05:26.561 "enable_zerocopy_send_client": false, 00:05:26.561 "zerocopy_threshold": 0, 
00:05:26.561 "tls_version": 0, 00:05:26.561 "enable_ktls": false 00:05:26.561 } 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "method": "sock_impl_set_options", 00:05:26.561 "params": { 00:05:26.561 "impl_name": "posix", 00:05:26.561 "recv_buf_size": 2097152, 00:05:26.561 "send_buf_size": 2097152, 00:05:26.561 "enable_recv_pipe": true, 00:05:26.561 "enable_quickack": false, 00:05:26.561 "enable_placement_id": 0, 00:05:26.561 "enable_zerocopy_send_server": true, 00:05:26.561 "enable_zerocopy_send_client": false, 00:05:26.561 "zerocopy_threshold": 0, 00:05:26.561 "tls_version": 0, 00:05:26.561 "enable_ktls": false 00:05:26.561 } 00:05:26.561 } 00:05:26.561 ] 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "subsystem": "vmd", 00:05:26.561 "config": [] 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "subsystem": "accel", 00:05:26.561 "config": [ 00:05:26.561 { 00:05:26.561 "method": "accel_set_options", 00:05:26.561 "params": { 00:05:26.561 "small_cache_size": 128, 00:05:26.561 "large_cache_size": 16, 00:05:26.561 "task_count": 2048, 00:05:26.561 "sequence_count": 2048, 00:05:26.561 "buf_count": 2048 00:05:26.561 } 00:05:26.561 } 00:05:26.561 ] 00:05:26.561 }, 00:05:26.561 { 00:05:26.561 "subsystem": "bdev", 00:05:26.561 "config": [ 00:05:26.561 { 00:05:26.561 "method": "bdev_set_options", 00:05:26.561 "params": { 00:05:26.561 "bdev_io_pool_size": 65535, 00:05:26.562 "bdev_io_cache_size": 256, 00:05:26.562 "bdev_auto_examine": true, 00:05:26.562 "iobuf_small_cache_size": 128, 00:05:26.562 "iobuf_large_cache_size": 16 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "bdev_raid_set_options", 00:05:26.562 "params": { 00:05:26.562 "process_window_size_kb": 1024, 00:05:26.562 "process_max_bandwidth_mb_sec": 0 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "bdev_iscsi_set_options", 00:05:26.562 "params": { 00:05:26.562 "timeout_sec": 30 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "bdev_nvme_set_options", 00:05:26.562 
"params": { 00:05:26.562 "action_on_timeout": "none", 00:05:26.562 "timeout_us": 0, 00:05:26.562 "timeout_admin_us": 0, 00:05:26.562 "keep_alive_timeout_ms": 10000, 00:05:26.562 "arbitration_burst": 0, 00:05:26.562 "low_priority_weight": 0, 00:05:26.562 "medium_priority_weight": 0, 00:05:26.562 "high_priority_weight": 0, 00:05:26.562 "nvme_adminq_poll_period_us": 10000, 00:05:26.562 "nvme_ioq_poll_period_us": 0, 00:05:26.562 "io_queue_requests": 0, 00:05:26.562 "delay_cmd_submit": true, 00:05:26.562 "transport_retry_count": 4, 00:05:26.562 "bdev_retry_count": 3, 00:05:26.562 "transport_ack_timeout": 0, 00:05:26.562 "ctrlr_loss_timeout_sec": 0, 00:05:26.562 "reconnect_delay_sec": 0, 00:05:26.562 "fast_io_fail_timeout_sec": 0, 00:05:26.562 "disable_auto_failback": false, 00:05:26.562 "generate_uuids": false, 00:05:26.562 "transport_tos": 0, 00:05:26.562 "nvme_error_stat": false, 00:05:26.562 "rdma_srq_size": 0, 00:05:26.562 "io_path_stat": false, 00:05:26.562 "allow_accel_sequence": false, 00:05:26.562 "rdma_max_cq_size": 0, 00:05:26.562 "rdma_cm_event_timeout_ms": 0, 00:05:26.562 "dhchap_digests": [ 00:05:26.562 "sha256", 00:05:26.562 "sha384", 00:05:26.562 "sha512" 00:05:26.562 ], 00:05:26.562 "dhchap_dhgroups": [ 00:05:26.562 "null", 00:05:26.562 "ffdhe2048", 00:05:26.562 "ffdhe3072", 00:05:26.562 "ffdhe4096", 00:05:26.562 "ffdhe6144", 00:05:26.562 "ffdhe8192" 00:05:26.562 ] 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "bdev_nvme_set_hotplug", 00:05:26.562 "params": { 00:05:26.562 "period_us": 100000, 00:05:26.562 "enable": false 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "bdev_wait_for_examine" 00:05:26.562 } 00:05:26.562 ] 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "scsi", 00:05:26.562 "config": null 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "scheduler", 00:05:26.562 "config": [ 00:05:26.562 { 00:05:26.562 "method": "framework_set_scheduler", 00:05:26.562 "params": { 00:05:26.562 
"name": "static" 00:05:26.562 } 00:05:26.562 } 00:05:26.562 ] 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "vhost_scsi", 00:05:26.562 "config": [] 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "vhost_blk", 00:05:26.562 "config": [] 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "ublk", 00:05:26.562 "config": [] 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "nbd", 00:05:26.562 "config": [] 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "nvmf", 00:05:26.562 "config": [ 00:05:26.562 { 00:05:26.562 "method": "nvmf_set_config", 00:05:26.562 "params": { 00:05:26.562 "discovery_filter": "match_any", 00:05:26.562 "admin_cmd_passthru": { 00:05:26.562 "identify_ctrlr": false 00:05:26.562 } 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "nvmf_set_max_subsystems", 00:05:26.562 "params": { 00:05:26.562 "max_subsystems": 1024 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "nvmf_set_crdt", 00:05:26.562 "params": { 00:05:26.562 "crdt1": 0, 00:05:26.562 "crdt2": 0, 00:05:26.562 "crdt3": 0 00:05:26.562 } 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "method": "nvmf_create_transport", 00:05:26.562 "params": { 00:05:26.562 "trtype": "TCP", 00:05:26.562 "max_queue_depth": 128, 00:05:26.562 "max_io_qpairs_per_ctrlr": 127, 00:05:26.562 "in_capsule_data_size": 4096, 00:05:26.562 "max_io_size": 131072, 00:05:26.562 "io_unit_size": 131072, 00:05:26.562 "max_aq_depth": 128, 00:05:26.562 "num_shared_buffers": 511, 00:05:26.562 "buf_cache_size": 4294967295, 00:05:26.562 "dif_insert_or_strip": false, 00:05:26.562 "zcopy": false, 00:05:26.562 "c2h_success": true, 00:05:26.562 "sock_priority": 0, 00:05:26.562 "abort_timeout_sec": 1, 00:05:26.562 "ack_timeout": 0, 00:05:26.562 "data_wr_pool_size": 0 00:05:26.562 } 00:05:26.562 } 00:05:26.562 ] 00:05:26.562 }, 00:05:26.562 { 00:05:26.562 "subsystem": "iscsi", 00:05:26.562 "config": [ 00:05:26.562 { 00:05:26.562 "method": "iscsi_set_options", 00:05:26.562 
"params": { 00:05:26.562 "node_base": "iqn.2016-06.io.spdk", 00:05:26.562 "max_sessions": 128, 00:05:26.562 "max_connections_per_session": 2, 00:05:26.562 "max_queue_depth": 64, 00:05:26.562 "default_time2wait": 2, 00:05:26.562 "default_time2retain": 20, 00:05:26.562 "first_burst_length": 8192, 00:05:26.562 "immediate_data": true, 00:05:26.562 "allow_duplicated_isid": false, 00:05:26.562 "error_recovery_level": 0, 00:05:26.562 "nop_timeout": 60, 00:05:26.562 "nop_in_interval": 30, 00:05:26.562 "disable_chap": false, 00:05:26.562 "require_chap": false, 00:05:26.562 "mutual_chap": false, 00:05:26.562 "chap_group": 0, 00:05:26.562 "max_large_datain_per_connection": 64, 00:05:26.562 "max_r2t_per_connection": 4, 00:05:26.562 "pdu_pool_size": 36864, 00:05:26.562 "immediate_data_pool_size": 16384, 00:05:26.562 "data_out_pool_size": 2048 00:05:26.562 } 00:05:26.562 } 00:05:26.562 ] 00:05:26.562 } 00:05:26.562 ] 00:05:26.562 } 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3459897 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3459897 ']' 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3459897 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3459897 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 3459897' 00:05:26.562 killing process with pid 3459897 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3459897 00:05:26.562 19:45:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3459897 00:05:26.823 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3460068 00:05:26.823 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.824 19:45:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3460068 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3460068 ']' 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3460068 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3460068 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3460068' 00:05:32.112 killing process with pid 3460068 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3460068 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3460068 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:32.112 00:05:32.112 real 0m6.535s 00:05:32.112 user 0m6.431s 00:05:32.112 sys 0m0.506s 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:32.112 ************************************ 00:05:32.112 END TEST skip_rpc_with_json 00:05:32.112 ************************************ 00:05:32.112 19:45:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:32.112 19:45:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.112 19:45:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.112 19:45:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.112 ************************************ 00:05:32.112 START TEST skip_rpc_with_delay 00:05:32.112 ************************************ 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.112 
19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.112 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.113 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.113 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:32.113 19:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:32.113 [2024-07-24 19:45:20.053633] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:32.113 [2024-07-24 19:45:20.053736] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:32.374 19:45:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:32.374 19:45:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.374 19:45:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.374 19:45:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.374 00:05:32.374 real 0m0.088s 00:05:32.374 user 0m0.057s 00:05:32.374 sys 0m0.030s 00:05:32.374 19:45:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.374 19:45:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:32.374 ************************************ 00:05:32.374 END TEST skip_rpc_with_delay 00:05:32.374 ************************************ 00:05:32.374 19:45:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:32.374 19:45:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:32.374 19:45:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:32.374 19:45:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.374 19:45:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.374 19:45:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.374 ************************************ 00:05:32.374 START TEST exit_on_failed_rpc_init 00:05:32.374 ************************************ 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3461354 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3461354 00:05:32.374 19:45:20 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3461354 ']' 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.374 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.374 [2024-07-24 19:45:20.192752] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:05:32.374 [2024-07-24 19:45:20.192811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461354 ] 00:05:32.374 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.374 [2024-07-24 19:45:20.257610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.634 [2024-07-24 19:45:20.334605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.206 19:45:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:33.206 19:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:33.206 [2024-07-24 19:45:21.015723] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:05:33.206 [2024-07-24 19:45:21.015775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461470 ] 00:05:33.206 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.206 [2024-07-24 19:45:21.090717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.206 [2024-07-24 19:45:21.154723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.206 [2024-07-24 19:45:21.154787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:33.206 [2024-07-24 19:45:21.154797] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:33.206 [2024-07-24 19:45:21.154803] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3461354 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3461354 ']' 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3461354 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3461354 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3461354' 
00:05:33.468 killing process with pid 3461354 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3461354 00:05:33.468 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3461354 00:05:33.729 00:05:33.729 real 0m1.345s 00:05:33.729 user 0m1.576s 00:05:33.729 sys 0m0.371s 00:05:33.729 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.729 19:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.729 ************************************ 00:05:33.729 END TEST exit_on_failed_rpc_init 00:05:33.729 ************************************ 00:05:33.729 19:45:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.729 00:05:33.729 real 0m13.651s 00:05:33.729 user 0m13.292s 00:05:33.729 sys 0m1.419s 00:05:33.729 19:45:21 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.729 19:45:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.729 ************************************ 00:05:33.729 END TEST skip_rpc 00:05:33.729 ************************************ 00:05:33.729 19:45:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.729 19:45:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.729 19:45:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.729 19:45:21 -- common/autotest_common.sh@10 -- # set +x 00:05:33.729 ************************************ 00:05:33.729 START TEST rpc_client 00:05:33.729 ************************************ 00:05:33.730 19:45:21 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:33.730 * Looking for test storage... 
00:05:33.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:33.730 19:45:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:33.730 OK 00:05:33.992 19:45:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.992 00:05:33.992 real 0m0.101s 00:05:33.992 user 0m0.038s 00:05:33.992 sys 0m0.067s 00:05:33.992 19:45:21 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.992 19:45:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.992 ************************************ 00:05:33.992 END TEST rpc_client 00:05:33.992 ************************************ 00:05:33.992 19:45:21 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.992 19:45:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.992 19:45:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.992 19:45:21 -- common/autotest_common.sh@10 -- # set +x 00:05:33.992 ************************************ 00:05:33.992 START TEST json_config 00:05:33.992 ************************************ 00:05:33.992 19:45:21 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.992 19:45:21 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.992 19:45:21 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.992 19:45:21 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.992 19:45:21 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.992 19:45:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:33.992 19:45:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.992 19:45:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.992 19:45:21 json_config -- paths/export.sh@5 -- # export PATH 00:05:33.992 19:45:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@47 -- # : 0 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.992 19:45:21 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:33.992 19:45:21 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:33.992 19:45:21 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:33.992 INFO: JSON configuration test init 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:33.992 19:45:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.992 19:45:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:33.992 19:45:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.992 19:45:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.992 19:45:21 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:33.992 19:45:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:33.992 19:45:21 json_config -- json_config/common.sh@10 -- # shift 00:05:33.992 19:45:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.992 19:45:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.992 19:45:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.992 19:45:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.992 19:45:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.992 19:45:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3461826 00:05:33.992 19:45:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.992 Waiting for target to run... 
00:05:33.992 19:45:21 json_config -- json_config/common.sh@25 -- # waitforlisten 3461826 /var/tmp/spdk_tgt.sock 00:05:33.992 19:45:21 json_config -- common/autotest_common.sh@831 -- # '[' -z 3461826 ']' 00:05:33.992 19:45:21 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.993 19:45:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:33.993 19:45:21 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.993 19:45:21 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:33.993 19:45:21 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.993 19:45:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.254 [2024-07-24 19:45:21.945823] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:05:34.254 [2024-07-24 19:45:21.945899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461826 ] 00:05:34.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.514 [2024-07-24 19:45:22.249066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.514 [2024-07-24 19:45:22.306041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.775 19:45:22 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.775 19:45:22 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:34.775 19:45:22 json_config -- json_config/common.sh@26 -- # echo '' 00:05:34.775 00:05:34.775 19:45:22 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:34.775 19:45:22 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:34.775 19:45:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.775 19:45:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.775 19:45:22 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:34.775 19:45:22 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:34.775 19:45:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.775 19:45:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.035 19:45:22 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:35.035 19:45:22 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:35.035 19:45:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:35.606 
19:45:23 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:35.606 19:45:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.606 19:45:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:35.606 19:45:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@51 -- # sort 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:35.606 19:45:23 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.606 19:45:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:35.606 19:45:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.606 19:45:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:35.606 19:45:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.606 19:45:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.866 MallocForNvmf0 00:05:35.866 19:45:23 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.866 19:45:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:36.126 MallocForNvmf1 00:05:36.126 19:45:23 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:36.126 19:45:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:36.126 [2024-07-24 19:45:23.964361] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.126 19:45:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.126 19:45:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.387 19:45:24 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.387 19:45:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.387 19:45:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.387 19:45:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.647 19:45:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:36.647 19:45:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:36.647 [2024-07-24 19:45:24.550306] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:36.647 19:45:24 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:36.647 19:45:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.647 19:45:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.908 19:45:24 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:36.908 19:45:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.908 19:45:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.908 19:45:24 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:36.908 19:45:24 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.908 19:45:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:36.908 MallocBdevForConfigChangeCheck 00:05:36.908 19:45:24 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:36.909 19:45:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:36.909 19:45:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.909 19:45:24 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:36.909 19:45:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.480 19:45:25 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:37.480 INFO: shutting down applications... 
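The `tgt_check_notification_types` step earlier in this run compares the expected notification types against what `notify_get_types` returns by taking a symmetric difference: concatenate both lists one word per line, then `sort | uniq -u` keeps only words that appear exactly once, so an empty result means the lists match. A standalone sketch of that idiom:

```shell
# Symmetric difference of two word lists via sort | uniq -u,
# mirroring json_config.sh's notification-type check.
enabled_types="bdev_register bdev_unregister"
get_types="bdev_register bdev_unregister"
type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)
if [ -z "$type_diff" ]; then
  echo 'notification types match'
else
  echo "type mismatch: $type_diff"
fi
```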
00:05:37.480 19:45:25 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:37.480 19:45:25 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:37.480 19:45:25 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:37.480 19:45:25 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:37.741 Calling clear_iscsi_subsystem 00:05:37.741 Calling clear_nvmf_subsystem 00:05:37.741 Calling clear_nbd_subsystem 00:05:37.741 Calling clear_ublk_subsystem 00:05:37.741 Calling clear_vhost_blk_subsystem 00:05:37.741 Calling clear_vhost_scsi_subsystem 00:05:37.741 Calling clear_bdev_subsystem 00:05:37.741 19:45:25 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:37.741 19:45:25 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:37.741 19:45:25 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:37.741 19:45:25 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.741 19:45:25 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:37.741 19:45:25 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:38.009 19:45:25 json_config -- json_config/json_config.sh@349 -- # break 00:05:38.009 19:45:25 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:38.010 19:45:25 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:38.010 19:45:25 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:38.010 19:45:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:38.010 19:45:25 json_config -- json_config/common.sh@35 -- # [[ -n 3461826 ]] 00:05:38.010 19:45:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3461826 00:05:38.010 19:45:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:38.010 19:45:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.010 19:45:25 json_config -- json_config/common.sh@41 -- # kill -0 3461826 00:05:38.010 19:45:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.629 19:45:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.629 19:45:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.629 19:45:26 json_config -- json_config/common.sh@41 -- # kill -0 3461826 00:05:38.629 19:45:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:38.629 19:45:26 json_config -- json_config/common.sh@43 -- # break 00:05:38.629 19:45:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:38.629 19:45:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:38.629 SPDK target shutdown done 00:05:38.629 19:45:26 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:38.629 INFO: relaunching applications... 
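The shutdown sequence above sends SIGINT to the target pid, then polls with `kill -0` (up to 30 times, 0.5 s apart) until the process is gone before declaring "SPDK target shutdown done". A sketch of that pattern with a `sleep` process standing in for spdk_tgt (the real common.sh sends SIGINT; SIGTERM is used here because a backgrounded `sleep` in a non-interactive shell ignores SIGINT, so it would not exit promptly in this demo):

```shell
# Signal-then-poll shutdown, as in json_config_test_shutdown_app.
sleep 30 &                          # stand-in for the spdk_tgt process
app_pid=$!
kill -TERM "$app_pid" 2>/dev/null   # real script: kill -SIGINT "$pid"
i=0
while [ "$i" -lt 30 ] && kill -0 "$app_pid" 2>/dev/null; do
  sleep 0.5
  i=$((i + 1))
done
if ! kill -0 "$app_pid" 2>/dev/null; then
  echo 'SPDK target shutdown done'
fi
```

`kill -0` sends no signal; it only checks whether the pid still exists, which is why it works as a liveness probe in the loop.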
00:05:38.629 19:45:26 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.629 19:45:26 json_config -- json_config/common.sh@9 -- # local app=target 00:05:38.629 19:45:26 json_config -- json_config/common.sh@10 -- # shift 00:05:38.629 19:45:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.629 19:45:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.629 19:45:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.629 19:45:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.629 19:45:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.629 19:45:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3462741 00:05:38.629 19:45:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.629 Waiting for target to run... 00:05:38.629 19:45:26 json_config -- json_config/common.sh@25 -- # waitforlisten 3462741 /var/tmp/spdk_tgt.sock 00:05:38.629 19:45:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.629 19:45:26 json_config -- common/autotest_common.sh@831 -- # '[' -z 3462741 ']' 00:05:38.629 19:45:26 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.629 19:45:26 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.629 19:45:26 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:38.629 19:45:26 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.629 19:45:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.629 [2024-07-24 19:45:26.499763] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:05:38.629 [2024-07-24 19:45:26.499822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462741 ] 00:05:38.629 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.890 [2024-07-24 19:45:26.739740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.890 [2024-07-24 19:45:26.790040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.461 [2024-07-24 19:45:27.281174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.461 [2024-07-24 19:45:27.313564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:39.461 19:45:27 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.461 19:45:27 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:39.461 19:45:27 json_config -- json_config/common.sh@26 -- # echo '' 00:05:39.461 00:05:39.461 19:45:27 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:39.461 19:45:27 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:39.461 INFO: Checking if target configuration is the same... 
00:05:39.461 19:45:27 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:39.461 19:45:27 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.461 19:45:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.461 + '[' 2 -ne 2 ']' 00:05:39.461 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.461 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:39.461 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.461 +++ basename /dev/fd/62 00:05:39.461 ++ mktemp /tmp/62.XXX 00:05:39.461 + tmp_file_1=/tmp/62.V5Z 00:05:39.461 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.461 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.461 + tmp_file_2=/tmp/spdk_tgt_config.json.Cnn 00:05:39.461 + ret=0 00:05:39.462 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.723 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.983 + diff -u /tmp/62.V5Z /tmp/spdk_tgt_config.json.Cnn 00:05:39.983 + echo 'INFO: JSON config files are the same' 00:05:39.983 INFO: JSON config files are the same 00:05:39.983 + rm /tmp/62.V5Z /tmp/spdk_tgt_config.json.Cnn 00:05:39.983 + exit 0 00:05:39.983 19:45:27 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:39.983 19:45:27 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:39.983 INFO: changing configuration and checking if this can be detected... 
00:05:39.983 19:45:27 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.983 19:45:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.983 19:45:27 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.983 19:45:27 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:39.983 19:45:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.983 + '[' 2 -ne 2 ']' 00:05:39.983 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.983 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:39.983 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.983 +++ basename /dev/fd/62 00:05:39.983 ++ mktemp /tmp/62.XXX 00:05:39.983 + tmp_file_1=/tmp/62.HJb 00:05:39.983 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.983 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.983 + tmp_file_2=/tmp/spdk_tgt_config.json.4lb 00:05:39.983 + ret=0 00:05:39.983 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.243 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:40.503 + diff -u /tmp/62.HJb /tmp/spdk_tgt_config.json.4lb 00:05:40.503 + ret=1 00:05:40.503 + echo '=== Start of file: /tmp/62.HJb ===' 00:05:40.503 + cat /tmp/62.HJb 00:05:40.503 + echo '=== End of file: /tmp/62.HJb ===' 00:05:40.503 + echo '' 00:05:40.503 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4lb ===' 00:05:40.503 + cat /tmp/spdk_tgt_config.json.4lb 00:05:40.503 + echo '=== End of file: /tmp/spdk_tgt_config.json.4lb ===' 00:05:40.503 + echo '' 00:05:40.503 + rm /tmp/62.HJb /tmp/spdk_tgt_config.json.4lb 00:05:40.503 + exit 1 00:05:40.503 19:45:28 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:40.503 INFO: configuration change detected. 
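Both comparisons above (`json_diff.sh`) normalize each JSON config with `config_filter.py -method sort` before running `diff -u`, so key order cannot trigger a false mismatch; only a real content change (here, deleting MallocBdevForConfigChangeCheck) makes `diff` return nonzero. A minimal stand-in for that normalize-then-diff technique, using python3's json module as the sorter (the /tmp file names are invented for the demo):

```shell
# Order-insensitive JSON comparison: sort keys, then diff.
printf '{"b": 1, "a": 2}' > /tmp/cfg_old.json
printf '{"a": 2, "b": 1}' > /tmp/cfg_new.json
norm() {
  python3 -c 'import json, sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True, indent=2))' "$1"
}
norm /tmp/cfg_old.json > /tmp/cfg_old.sorted
norm /tmp/cfg_new.json > /tmp/cfg_new.sorted
if diff -u /tmp/cfg_old.sorted /tmp/cfg_new.sorted > /dev/null; then
  ret=0; echo 'INFO: JSON config files are the same'
else
  ret=1; echo 'INFO: configuration change detected.'
fi
```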
00:05:40.503 19:45:28 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:40.503 19:45:28 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:40.503 19:45:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.503 19:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.503 19:45:28 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@321 -- # [[ -n 3462741 ]] 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.504 19:45:28 json_config -- json_config/json_config.sh@327 -- # killprocess 3462741 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@950 -- # '[' -z 3462741 ']' 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@954 -- # kill -0 
3462741 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@955 -- # uname 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3462741 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3462741' 00:05:40.504 killing process with pid 3462741 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@969 -- # kill 3462741 00:05:40.504 19:45:28 json_config -- common/autotest_common.sh@974 -- # wait 3462741 00:05:40.764 19:45:28 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.764 19:45:28 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:40.764 19:45:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.764 19:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.764 19:45:28 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:40.764 19:45:28 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:40.764 INFO: Success 00:05:40.764 00:05:40.764 real 0m6.908s 00:05:40.764 user 0m8.408s 00:05:40.764 sys 0m1.643s 00:05:40.764 19:45:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.764 19:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.764 ************************************ 00:05:40.764 END TEST json_config 00:05:40.764 ************************************ 00:05:40.764 19:45:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:40.764 19:45:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.764 19:45:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.764 19:45:28 -- common/autotest_common.sh@10 -- # set +x 00:05:41.024 ************************************ 00:05:41.024 START TEST json_config_extra_key 00:05:41.024 ************************************ 00:05:41.024 19:45:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:41.024 19:45:28 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.024 19:45:28 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.024 19:45:28 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.024 19:45:28 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.024 19:45:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.024 19:45:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.024 19:45:28 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.024 19:45:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.024 19:45:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.024 19:45:28 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.024 19:45:28 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.024 INFO: launching applications... 
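The `declare -A` lines above show how common.sh keeps per-app bookkeeping (pid, RPC socket, extra params, config path) in bash associative arrays keyed by the app name, here "target". A minimal sketch of that pattern (the pid value is a placeholder for illustration):

```shell
# Per-app bookkeeping via bash associative arrays, as in common.sh.
declare -A app_pid app_socket app_params
app_socket["target"]=/var/tmp/spdk_tgt.sock
app_params["target"]='-m 0x1 -s 1024'
app_pid["target"]=12345                      # placeholder pid
echo "target listens on ${app_socket[target]} (${app_params[target]})"
```

Keying by app name lets the same helpers manage both a "target" and an "initiator" instance without duplicated variables.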
00:05:41.024 19:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3463494 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.024 Waiting for target to run... 
00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3463494 /var/tmp/spdk_tgt.sock 00:05:41.024 19:45:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3463494 ']' 00:05:41.024 19:45:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.024 19:45:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.024 19:45:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.024 19:45:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.024 19:45:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.024 19:45:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.024 [2024-07-24 19:45:28.895883] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:05:41.024 [2024-07-24 19:45:28.895950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463494 ]
00:05:41.024 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.285 [2024-07-24 19:45:29.155288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.285 [2024-07-24 19:45:29.207263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.856 19:45:29 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:41.856 19:45:29 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:41.856
00:05:41.856 19:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:41.856 INFO: shutting down applications...
00:05:41.856 19:45:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3463494 ]]
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3463494
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3463494
00:05:41.856 19:45:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:42.428 19:45:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:42.428 19:45:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:42.428 19:45:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3463494
00:05:42.428 19:45:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:42.428 19:45:30 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:42.428 19:45:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:42.428 19:45:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:42.428 SPDK target shutdown done
00:05:42.428 19:45:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:42.428 Success
00:05:42.428
00:05:42.428 real 0m1.414s
00:05:42.428 user 0m1.056s
00:05:42.428 sys 0m0.362s
00:05:42.428 19:45:30 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:42.428 19:45:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:42.428 ************************************
00:05:42.428 END TEST json_config_extra_key
00:05:42.428 ************************************
00:05:42.428 19:45:30 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:42.428 19:45:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:42.428 19:45:30 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:42.428 19:45:30 -- common/autotest_common.sh@10 -- # set +x
00:05:42.428 ************************************
00:05:42.428 START TEST alias_rpc
00:05:42.428 ************************************
00:05:42.428 19:45:30 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:42.428 * Looking for test storage...
00:05:42.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:42.428 19:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:42.428 19:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3463842
00:05:42.428 19:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3463842
00:05:42.428 19:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:42.428 19:45:30 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3463842 ']'
00:05:42.428 19:45:30 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.428 19:45:30 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:42.428 19:45:30 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:42.428 19:45:30 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:42.428 19:45:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:42.689 [2024-07-24 19:45:30.387867] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:42.689 [2024-07-24 19:45:30.387943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463842 ]
00:05:42.689 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.689 [2024-07-24 19:45:30.451075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.689 [2024-07-24 19:45:30.526046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.260 19:45:31 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:43.260 19:45:31 alias_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:43.260 19:45:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:43.521 19:45:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3463842
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3463842 ']'
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3463842
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@955 -- # uname
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3463842
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:43.521 19:45:31 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3463842'
00:05:43.521 killing process with pid 3463842
19:45:31 alias_rpc -- common/autotest_common.sh@969 -- # kill 3463842
19:45:31 alias_rpc -- common/autotest_common.sh@974 -- # wait 3463842
00:05:43.781
00:05:43.781 real 0m1.353s
00:05:43.781 user 0m1.449s
00:05:43.781 sys 0m0.393s
19:45:31 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
19:45:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:43.781 ************************************
00:05:43.781 END TEST alias_rpc
00:05:43.781 ************************************
00:05:43.781 19:45:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:05:43.781 19:45:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:43.781 19:45:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:43.781 19:45:31 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:43.781 19:45:31 -- common/autotest_common.sh@10 -- # set +x
00:05:43.781 ************************************
00:05:43.781 START TEST spdkcli_tcp
00:05:43.781 ************************************
00:05:43.781 19:45:31 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:43.781 * Looking for test storage...
00:05:44.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:05:44.042 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:05:44.042 19:45:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:05:44.042 19:45:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:05:44.042 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:44.042 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:44.042 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:44.042 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:44.042 19:45:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:44.043 19:45:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:44.043 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3464104
00:05:44.043 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3464104
00:05:44.043 19:45:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:44.043 19:45:31 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3464104 ']'
00:05:44.043 19:45:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.043 19:45:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:44.043 19:45:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.043 19:45:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:44.043 19:45:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:44.043 [2024-07-24 19:45:31.810397] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:44.043 [2024-07-24 19:45:31.810469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464104 ]
00:05:44.043 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.043 [2024-07-24 19:45:31.874128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:44.043 [2024-07-24 19:45:31.954079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.043 [2024-07-24 19:45:31.954082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.983 19:45:32 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:44.983 19:45:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0
00:05:44.983 19:45:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3464286
00:05:44.983 19:45:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:44.983 19:45:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:44.983 [
00:05:44.983 "bdev_malloc_delete",
00:05:44.983 "bdev_malloc_create",
00:05:44.983 "bdev_null_resize",
00:05:44.983 "bdev_null_delete",
00:05:44.983 "bdev_null_create",
00:05:44.983 "bdev_nvme_cuse_unregister",
00:05:44.983 "bdev_nvme_cuse_register",
00:05:44.983 "bdev_opal_new_user",
00:05:44.983 "bdev_opal_set_lock_state",
00:05:44.983 "bdev_opal_delete",
00:05:44.983 "bdev_opal_get_info",
00:05:44.983 "bdev_opal_create",
00:05:44.983 "bdev_nvme_opal_revert",
00:05:44.983 "bdev_nvme_opal_init",
00:05:44.983 "bdev_nvme_send_cmd",
00:05:44.983 "bdev_nvme_get_path_iostat",
00:05:44.983 "bdev_nvme_get_mdns_discovery_info",
00:05:44.983 "bdev_nvme_stop_mdns_discovery",
00:05:44.983 "bdev_nvme_start_mdns_discovery",
00:05:44.983 "bdev_nvme_set_multipath_policy",
00:05:44.983 "bdev_nvme_set_preferred_path",
00:05:44.983 "bdev_nvme_get_io_paths",
00:05:44.983 "bdev_nvme_remove_error_injection",
00:05:44.983 "bdev_nvme_add_error_injection",
00:05:44.983 "bdev_nvme_get_discovery_info",
00:05:44.983 "bdev_nvme_stop_discovery",
00:05:44.984 "bdev_nvme_start_discovery",
00:05:44.984 "bdev_nvme_get_controller_health_info",
00:05:44.984 "bdev_nvme_disable_controller",
00:05:44.984 "bdev_nvme_enable_controller",
00:05:44.984 "bdev_nvme_reset_controller",
00:05:44.984 "bdev_nvme_get_transport_statistics",
00:05:44.984 "bdev_nvme_apply_firmware",
00:05:44.984 "bdev_nvme_detach_controller",
00:05:44.984 "bdev_nvme_get_controllers",
00:05:44.984 "bdev_nvme_attach_controller",
00:05:44.984 "bdev_nvme_set_hotplug",
00:05:44.984 "bdev_nvme_set_options",
00:05:44.984 "bdev_passthru_delete",
00:05:44.984 "bdev_passthru_create",
00:05:44.984 "bdev_lvol_set_parent_bdev",
00:05:44.984 "bdev_lvol_set_parent",
00:05:44.984 "bdev_lvol_check_shallow_copy",
00:05:44.984 "bdev_lvol_start_shallow_copy",
00:05:44.984 "bdev_lvol_grow_lvstore",
00:05:44.984 "bdev_lvol_get_lvols",
00:05:44.984 "bdev_lvol_get_lvstores",
00:05:44.984 "bdev_lvol_delete",
00:05:44.984 "bdev_lvol_set_read_only",
00:05:44.984 "bdev_lvol_resize",
00:05:44.984 "bdev_lvol_decouple_parent",
00:05:44.984 "bdev_lvol_inflate",
00:05:44.984 "bdev_lvol_rename",
00:05:44.984 "bdev_lvol_clone_bdev",
00:05:44.984 "bdev_lvol_clone",
00:05:44.984 "bdev_lvol_snapshot",
00:05:44.984 "bdev_lvol_create",
00:05:44.984 "bdev_lvol_delete_lvstore",
00:05:44.984 "bdev_lvol_rename_lvstore",
00:05:44.984 "bdev_lvol_create_lvstore",
00:05:44.984 "bdev_raid_set_options",
00:05:44.984 "bdev_raid_remove_base_bdev",
00:05:44.984 "bdev_raid_add_base_bdev",
00:05:44.984 "bdev_raid_delete",
00:05:44.984 "bdev_raid_create",
00:05:44.984 "bdev_raid_get_bdevs",
00:05:44.984 "bdev_error_inject_error",
00:05:44.984 "bdev_error_delete",
00:05:44.984 "bdev_error_create",
00:05:44.984 "bdev_split_delete",
00:05:44.984 "bdev_split_create",
00:05:44.984 "bdev_delay_delete",
00:05:44.984 "bdev_delay_create",
00:05:44.984 "bdev_delay_update_latency",
00:05:44.984 "bdev_zone_block_delete",
00:05:44.984 "bdev_zone_block_create",
00:05:44.984 "blobfs_create",
00:05:44.984 "blobfs_detect",
00:05:44.984 "blobfs_set_cache_size",
00:05:44.984 "bdev_aio_delete",
00:05:44.984 "bdev_aio_rescan",
00:05:44.984 "bdev_aio_create",
00:05:44.984 "bdev_ftl_set_property",
00:05:44.984 "bdev_ftl_get_properties",
00:05:44.984 "bdev_ftl_get_stats",
00:05:44.984 "bdev_ftl_unmap",
00:05:44.984 "bdev_ftl_unload",
00:05:44.984 "bdev_ftl_delete",
00:05:44.984 "bdev_ftl_load",
00:05:44.984 "bdev_ftl_create",
00:05:44.984 "bdev_virtio_attach_controller",
00:05:44.984 "bdev_virtio_scsi_get_devices",
00:05:44.984 "bdev_virtio_detach_controller",
00:05:44.984 "bdev_virtio_blk_set_hotplug",
00:05:44.984 "bdev_iscsi_delete",
00:05:44.984 "bdev_iscsi_create",
00:05:44.984 "bdev_iscsi_set_options",
00:05:44.984 "accel_error_inject_error",
00:05:44.984 "ioat_scan_accel_module",
00:05:44.984 "dsa_scan_accel_module",
00:05:44.984 "iaa_scan_accel_module",
00:05:44.984 "vfu_virtio_create_scsi_endpoint",
00:05:44.984 "vfu_virtio_scsi_remove_target",
00:05:44.984 "vfu_virtio_scsi_add_target",
00:05:44.984 "vfu_virtio_create_blk_endpoint",
00:05:44.984 "vfu_virtio_delete_endpoint",
00:05:44.984 "keyring_file_remove_key",
00:05:44.984 "keyring_file_add_key",
00:05:44.984 "keyring_linux_set_options",
00:05:44.984 "iscsi_get_histogram",
00:05:44.984 "iscsi_enable_histogram",
00:05:44.984 "iscsi_set_options",
00:05:44.984 "iscsi_get_auth_groups",
00:05:44.984 "iscsi_auth_group_remove_secret",
00:05:44.984 "iscsi_auth_group_add_secret",
00:05:44.984 "iscsi_delete_auth_group",
00:05:44.984 "iscsi_create_auth_group",
00:05:44.984 "iscsi_set_discovery_auth",
00:05:44.984 "iscsi_get_options",
00:05:44.984 "iscsi_target_node_request_logout",
00:05:44.984 "iscsi_target_node_set_redirect",
00:05:44.984 "iscsi_target_node_set_auth",
00:05:44.984 "iscsi_target_node_add_lun",
00:05:44.984 "iscsi_get_stats",
00:05:44.984 "iscsi_get_connections",
00:05:44.984 "iscsi_portal_group_set_auth",
00:05:44.984 "iscsi_start_portal_group",
00:05:44.984 "iscsi_delete_portal_group",
00:05:44.984 "iscsi_create_portal_group",
00:05:44.984 "iscsi_get_portal_groups",
00:05:44.984 "iscsi_delete_target_node",
00:05:44.984 "iscsi_target_node_remove_pg_ig_maps",
00:05:44.984 "iscsi_target_node_add_pg_ig_maps",
00:05:44.984 "iscsi_create_target_node",
00:05:44.984 "iscsi_get_target_nodes",
00:05:44.984 "iscsi_delete_initiator_group",
00:05:44.984 "iscsi_initiator_group_remove_initiators",
00:05:44.984 "iscsi_initiator_group_add_initiators",
00:05:44.984 "iscsi_create_initiator_group",
00:05:44.984 "iscsi_get_initiator_groups",
00:05:44.984 "nvmf_set_crdt",
00:05:44.984 "nvmf_set_config",
00:05:44.984 "nvmf_set_max_subsystems",
00:05:44.984 "nvmf_stop_mdns_prr",
00:05:44.984 "nvmf_publish_mdns_prr",
00:05:44.984 "nvmf_subsystem_get_listeners",
00:05:44.984 "nvmf_subsystem_get_qpairs",
00:05:44.984 "nvmf_subsystem_get_controllers",
00:05:44.984 "nvmf_get_stats",
00:05:44.984 "nvmf_get_transports",
00:05:44.984 "nvmf_create_transport",
00:05:44.984 "nvmf_get_targets",
00:05:44.984 "nvmf_delete_target",
00:05:44.984 "nvmf_create_target",
00:05:44.984 "nvmf_subsystem_allow_any_host",
00:05:44.984 "nvmf_subsystem_remove_host",
00:05:44.984 "nvmf_subsystem_add_host",
00:05:44.984 "nvmf_ns_remove_host",
00:05:44.984 "nvmf_ns_add_host",
00:05:44.984 "nvmf_subsystem_remove_ns",
00:05:44.984 "nvmf_subsystem_add_ns",
00:05:44.984 "nvmf_subsystem_listener_set_ana_state",
00:05:44.984 "nvmf_discovery_get_referrals",
00:05:44.984 "nvmf_discovery_remove_referral",
00:05:44.984 "nvmf_discovery_add_referral",
00:05:44.984 "nvmf_subsystem_remove_listener",
00:05:44.984 "nvmf_subsystem_add_listener",
00:05:44.984 "nvmf_delete_subsystem",
00:05:44.984 "nvmf_create_subsystem",
00:05:44.984 "nvmf_get_subsystems",
00:05:44.984 "env_dpdk_get_mem_stats",
00:05:44.984 "nbd_get_disks",
00:05:44.984 "nbd_stop_disk",
00:05:44.984 "nbd_start_disk",
00:05:44.984 "ublk_recover_disk",
00:05:44.984 "ublk_get_disks",
00:05:44.984 "ublk_stop_disk",
00:05:44.984 "ublk_start_disk",
00:05:44.984 "ublk_destroy_target",
00:05:44.985 "ublk_create_target",
00:05:44.985 "virtio_blk_create_transport",
00:05:44.985 "virtio_blk_get_transports",
00:05:44.985 "vhost_controller_set_coalescing",
00:05:44.985 "vhost_get_controllers",
00:05:44.985 "vhost_delete_controller",
00:05:44.985 "vhost_create_blk_controller",
00:05:44.985 "vhost_scsi_controller_remove_target",
00:05:44.985 "vhost_scsi_controller_add_target",
00:05:44.985 "vhost_start_scsi_controller",
00:05:44.985 "vhost_create_scsi_controller",
00:05:44.985 "thread_set_cpumask",
00:05:44.985 "framework_get_governor",
00:05:44.985 "framework_get_scheduler",
00:05:44.985 "framework_set_scheduler",
00:05:44.985 "framework_get_reactors",
00:05:44.985 "thread_get_io_channels",
00:05:44.985 "thread_get_pollers",
00:05:44.985 "thread_get_stats",
00:05:44.985 "framework_monitor_context_switch",
00:05:44.985 "spdk_kill_instance",
00:05:44.985 "log_enable_timestamps",
00:05:44.985 "log_get_flags",
00:05:44.985 "log_clear_flag",
00:05:44.985 "log_set_flag",
00:05:44.985 "log_get_level",
00:05:44.985 "log_set_level",
00:05:44.985 "log_get_print_level",
00:05:44.985 "log_set_print_level",
00:05:44.985 "framework_enable_cpumask_locks",
00:05:44.985 "framework_disable_cpumask_locks",
00:05:44.985 "framework_wait_init",
00:05:44.985 "framework_start_init",
00:05:44.985 "scsi_get_devices",
00:05:44.985 "bdev_get_histogram",
00:05:44.985 "bdev_enable_histogram",
00:05:44.985 "bdev_set_qos_limit",
00:05:44.985 "bdev_set_qd_sampling_period",
00:05:44.985 "bdev_get_bdevs",
00:05:44.985 "bdev_reset_iostat",
00:05:44.985 "bdev_get_iostat",
00:05:44.985 "bdev_examine",
00:05:44.985 "bdev_wait_for_examine",
00:05:44.985 "bdev_set_options",
00:05:44.985 "notify_get_notifications",
00:05:44.985 "notify_get_types",
00:05:44.985 "accel_get_stats",
00:05:44.985 "accel_set_options",
00:05:44.985 "accel_set_driver",
00:05:44.985 "accel_crypto_key_destroy",
00:05:44.985 "accel_crypto_keys_get",
00:05:44.985 "accel_crypto_key_create",
00:05:44.985 "accel_assign_opc",
00:05:44.985 "accel_get_module_info",
00:05:44.985 "accel_get_opc_assignments",
00:05:44.985 "vmd_rescan",
00:05:44.985 "vmd_remove_device",
00:05:44.985 "vmd_enable",
00:05:44.985 "sock_get_default_impl",
00:05:44.985 "sock_set_default_impl",
00:05:44.985 "sock_impl_set_options",
00:05:44.985 "sock_impl_get_options",
00:05:44.985 "iobuf_get_stats",
00:05:44.985 "iobuf_set_options",
00:05:44.985 "keyring_get_keys",
00:05:44.985 "framework_get_pci_devices",
00:05:44.985 "framework_get_config",
00:05:44.985 "framework_get_subsystems",
00:05:44.985 "vfu_tgt_set_base_path",
00:05:44.985 "trace_get_info",
00:05:44.985 "trace_get_tpoint_group_mask",
00:05:44.985 "trace_disable_tpoint_group",
00:05:44.985 "trace_enable_tpoint_group",
00:05:44.985 "trace_clear_tpoint_mask",
00:05:44.985 "trace_set_tpoint_mask",
00:05:44.985 "spdk_get_version",
00:05:44.985 "rpc_get_methods"
00:05:44.985 ]
00:05:44.985 19:45:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:44.985 19:45:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:44.985 19:45:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3464104
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3464104 ']'
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3464104
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3464104
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:44.985 19:45:32 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3464104'
00:05:44.985 killing process with pid 3464104
19:45:32 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3464104
19:45:32 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3464104
00:05:45.246
00:05:45.246 real 0m1.401s
00:05:45.246 user 0m2.552s
00:05:45.246 sys 0m0.436s
19:45:33 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
19:45:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:45.246 ************************************
00:05:45.246 END TEST spdkcli_tcp
00:05:45.246 ************************************
00:05:45.246 19:45:33 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:45.246 19:45:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:45.246 19:45:33 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:45.246 19:45:33 -- common/autotest_common.sh@10 -- # set +x
00:05:45.246 ************************************
00:05:45.246 START TEST dpdk_mem_utility
00:05:45.246 ************************************
00:05:45.246 19:45:33 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:45.507 * Looking for test storage...
00:05:45.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:05:45.507 19:45:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:45.507 19:45:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3464382
00:05:45.507 19:45:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3464382
00:05:45.507 19:45:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:45.507 19:45:33 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3464382 ']'
00:05:45.507 19:45:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:45.507 19:45:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:45.507 19:45:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:45.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:45.507 19:45:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:45.507 19:45:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:45.507 [2024-07-24 19:45:33.271298] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:45.507 [2024-07-24 19:45:33.271373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464382 ]
00:05:45.507 EAL: No free 2048 kB hugepages reported on node 1
00:05:45.507 [2024-07-24 19:45:33.335781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.507 [2024-07-24 19:45:33.411956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.450 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:46.450 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:05:46.450 19:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:46.450 19:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:46.450 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:46.450 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:46.450 {
00:05:46.450 "filename": "/tmp/spdk_mem_dump.txt"
00:05:46.450 }
00:05:46.450 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:46.450 19:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:46.450 DPDK memory size 814.000000 MiB in 1 heap(s)
00:05:46.450 1 heaps totaling size 814.000000 MiB
00:05:46.450 size: 814.000000 MiB heap id: 0
00:05:46.450 end heaps----------
00:05:46.450 8 mempools totaling size 598.116089 MiB
00:05:46.450 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:46.450 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:46.450 size: 84.521057 MiB name: bdev_io_3464382
00:05:46.450 size: 51.011292 MiB name: evtpool_3464382
00:05:46.450 size: 50.003479 MiB name: msgpool_3464382
00:05:46.450 size: 21.763794 MiB name: PDU_Pool
00:05:46.450 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:46.450 size: 0.026123 MiB name: Session_Pool
00:05:46.450 end mempools-------
00:05:46.450 6 memzones totaling size 4.142822 MiB
00:05:46.450 size: 1.000366 MiB name: RG_ring_0_3464382
00:05:46.450 size: 1.000366 MiB name: RG_ring_1_3464382
00:05:46.450 size: 1.000366 MiB name: RG_ring_4_3464382
00:05:46.450 size: 1.000366 MiB name: RG_ring_5_3464382
00:05:46.450 size: 0.125366 MiB name: RG_ring_2_3464382
00:05:46.450 size: 0.015991 MiB name: RG_ring_3_3464382
00:05:46.450 end memzones-------
00:05:46.450 19:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:46.450 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:05:46.450 list of free elements. size: 12.519348 MiB
00:05:46.450 element at address: 0x200000400000 with size: 1.999512 MiB
00:05:46.450 element at address: 0x200018e00000 with size: 0.999878 MiB
00:05:46.450 element at address: 0x200019000000 with size: 0.999878 MiB
00:05:46.450 element at address: 0x200003e00000 with size: 0.996277 MiB
00:05:46.450 element at address: 0x200031c00000 with size: 0.994446 MiB
00:05:46.450 element at address: 0x200013800000 with size: 0.978699 MiB
00:05:46.450 element at address: 0x200007000000 with size: 0.959839 MiB
00:05:46.450 element at address: 0x200019200000 with size: 0.936584 MiB
00:05:46.450 element at address: 0x200000200000 with size: 0.841614 MiB
00:05:46.450 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:05:46.450 element at address: 0x20000b200000 with size: 0.490723 MiB
00:05:46.450 element at address: 0x200000800000 with size: 0.487793 MiB
00:05:46.450 element at address: 0x200019400000 with size: 0.485657 MiB
00:05:46.450 element at address: 0x200027e00000 with size: 0.410034 MiB
00:05:46.450 element at address: 0x200003a00000 with size: 0.355530 MiB
00:05:46.450 list of standard malloc elements. size: 199.218079 MiB
00:05:46.450 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:05:46.450 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:05:46.450 element at address: 0x200018efff80 with size: 1.000122 MiB
00:05:46.450 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:05:46.450 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:46.450 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:46.450 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:05:46.450 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:46.450 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:05:46.450 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:46.450 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:05:46.450 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200003adb300 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200003adb500 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200003affa80 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200003affb40 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:05:46.450 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:05:46.450 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:05:46.450 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:05:46.450 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:05:46.450 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:05:46.450 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200027e69040 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:05:46.450 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:05:46.450 list of memzone associated elements. size: 602.262573 MiB
00:05:46.450 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:05:46.450 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:46.450 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:05:46.450 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:46.450 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:05:46.450 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3464382_0
00:05:46.450 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:05:46.450 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3464382_0
00:05:46.450 element at address: 0x200003fff380 with size: 48.003052 MiB
00:05:46.450 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3464382_0
00:05:46.450 element at address: 0x2000195be940 with size: 20.255554 MiB
00:05:46.450 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:46.450 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:05:46.450 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:46.450 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:05:46.450 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3464382
00:05:46.450 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:05:46.450 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3464382
00:05:46.450 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:46.450 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3464382
00:05:46.450 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:05:46.450 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:46.450 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:05:46.450 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:46.450 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:05:46.450 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:46.450 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:05:46.450 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:46.450 element at address: 0x200003eff180 with size: 1.000488 MiB
00:05:46.450 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3464382
00:05:46.450 element at address: 0x200003affc00 with size: 1.000488 MiB
00:05:46.450 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3464382
00:05:46.450 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:05:46.450 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3464382
00:05:46.450 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:05:46.450 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3464382
00:05:46.450 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:05:46.450 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3464382
00:05:46.450 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:05:46.450 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:46.450 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:05:46.450 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:46.450 element at address: 0x20001947c540 with size: 0.250488 MiB
00:05:46.450 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:46.450 element at address: 0x200003adf880 with size: 0.125488 MiB
00:05:46.450 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3464382
00:05:46.450 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:05:46.450 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:46.450 element at address: 0x200027e69100 with size: 0.023743 MiB
00:05:46.450 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:46.450 element at address: 0x200003adb5c0 with size: 0.016113
MiB 00:05:46.451 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3464382 00:05:46.451 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:46.451 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.451 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:46.451 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3464382 00:05:46.451 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:46.451 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3464382 00:05:46.451 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:46.451 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.451 19:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.451 19:45:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3464382 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3464382 ']' 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3464382 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3464382 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3464382' 00:05:46.451 killing process with pid 3464382 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3464382 00:05:46.451 19:45:34 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3464382 00:05:46.451 00:05:46.451 real 0m1.283s 
00:05:46.451  user	0m1.348s
00:05:46.451  sys	0m0.365s
00:05:46.451  19:45:34 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:46.451  19:45:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:46.451  ************************************
00:05:46.451  END TEST dpdk_mem_utility
00:05:46.451  ************************************
00:05:46.714  19:45:34  -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:46.714  19:45:34  -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:46.714  19:45:34  -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:46.714  19:45:34  -- common/autotest_common.sh@10 -- # set +x
00:05:46.714  ************************************
00:05:46.714  START TEST event
00:05:46.714  ************************************
00:05:46.714  19:45:34 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
* Looking for test storage...
00:05:46.714  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:46.714  19:45:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:46.714  19:45:34 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:46.714  19:45:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:46.714  19:45:34 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:46.714  19:45:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:46.714  19:45:34 event -- common/autotest_common.sh@10 -- # set +x
00:05:46.714  ************************************
00:05:46.714  START TEST event_perf
00:05:46.714  ************************************
00:05:46.714  19:45:34 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:46.714  Running I/O for 1 seconds...[2024-07-24 19:45:34.620060] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:46.714  [2024-07-24 19:45:34.620162] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464752 ]
00:05:46.714  EAL: No free 2048 kB hugepages reported on node 1
00:05:46.975  [2024-07-24 19:45:34.688611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:46.975  [2024-07-24 19:45:34.765565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:46.975  [2024-07-24 19:45:34.765684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:46.975  [2024-07-24 19:45:34.765840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.975  Running I/O for 1 seconds...[2024-07-24 19:45:34.765841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:47.918
00:05:47.918  lcore  0:   177557
00:05:47.918  lcore  1:   177558
00:05:47.918  lcore  2:   177556
00:05:47.918  lcore  3:   177559
00:05:47.918  done.
00:05:47.918
00:05:47.918  real	0m1.222s
00:05:47.918  user	0m4.144s
00:05:47.918  sys	0m0.074s
00:05:47.918  19:45:35 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:47.918  19:45:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:47.918  ************************************
00:05:47.918  END TEST event_perf
00:05:47.918  ************************************
00:05:47.918  19:45:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:47.918  19:45:35 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:47.918  19:45:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:47.918  19:45:35 event -- common/autotest_common.sh@10 -- # set +x
00:05:48.178  ************************************
00:05:48.178  START TEST event_reactor
00:05:48.179  ************************************
00:05:48.179  19:45:35 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:48.179  [2024-07-24 19:45:35.902235] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:48.179  [2024-07-24 19:45:35.902270] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465104 ]
00:05:48.179  EAL: No free 2048 kB hugepages reported on node 1
00:05:48.179  [2024-07-24 19:45:35.954486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.179  [2024-07-24 19:45:36.021069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.121  test_start
00:05:49.121  oneshot
00:05:49.121  tick 100
00:05:49.121  tick 100
00:05:49.121  tick 250
00:05:49.121  tick 100
00:05:49.121  tick 100
00:05:49.121  tick 100
00:05:49.121  tick 250
00:05:49.121  tick 500
00:05:49.121  tick 100
00:05:49.121  tick 100
00:05:49.121  tick 250
00:05:49.121  tick 100
00:05:49.121  tick 100
00:05:49.121  test_end
00:05:49.121
00:05:49.121  real	0m1.177s
00:05:49.121  user	0m1.116s
00:05:49.121  sys	0m0.057s
00:05:49.121  19:45:37 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:49.121  19:45:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:49.121  ************************************
00:05:49.121  END TEST event_reactor
00:05:49.121  ************************************
00:05:49.382  19:45:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:49.382  19:45:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:49.382  19:45:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:49.382  19:45:37 event -- common/autotest_common.sh@10 -- # set +x
00:05:49.382  ************************************
00:05:49.382  START TEST event_reactor_perf
00:05:49.382  ************************************
00:05:49.382  19:45:37 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:49.382  [2024-07-24 19:45:37.167034] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:49.382  [2024-07-24 19:45:37.167137] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465453 ]
00:05:49.382  EAL: No free 2048 kB hugepages reported on node 1
00:05:49.382  [2024-07-24 19:45:37.231087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.382  [2024-07-24 19:45:37.298642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.767  test_start
00:05:50.767  test_end
00:05:50.767  Performance: 366022 events per second
00:05:50.767
00:05:50.767  real	0m1.205s
00:05:50.767  user	0m1.140s
00:05:50.767  sys	0m0.062s
00:05:50.767  19:45:38 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:50.767  19:45:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:50.767  ************************************
00:05:50.767  END TEST event_reactor_perf
00:05:50.767  ************************************
00:05:50.767  19:45:38 event -- event/event.sh@49 -- # uname -s
00:05:50.767  19:45:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:50.767  19:45:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:50.767  19:45:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:50.767  19:45:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:50.767  19:45:38 event -- common/autotest_common.sh@10 -- # set +x
00:05:50.767  ************************************
00:05:50.767  START TEST event_scheduler
00:05:50.767  ************************************
00:05:50.767  19:45:38 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
* Looking for test storage...
00:05:50.767  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:50.767  19:45:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:50.767  19:45:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3465688
00:05:50.767  19:45:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:50.767  19:45:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:50.767  19:45:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3465688
00:05:50.767  19:45:38 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3465688 ']'
00:05:50.767  19:45:38 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:50.767  19:45:38 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:50.767  19:45:38 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:50.767  19:45:38 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:50.767  19:45:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:50.768  [2024-07-24 19:45:38.592886] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:50.768  [2024-07-24 19:45:38.592973] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465688 ]
00:05:50.768  EAL: No free 2048 kB hugepages reported on node 1
00:05:50.768  [2024-07-24 19:45:38.649250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:50.768  [2024-07-24 19:45:38.716474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.768  [2024-07-24 19:45:38.716632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:50.768  [2024-07-24 19:45:38.716788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:50.768  [2024-07-24 19:45:38.716789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:05:51.710  19:45:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  [2024-07-24 19:45:39.374835] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
[2024-07-24 19:45:39.374848] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
[2024-07-24 19:45:39.374855] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-07-24 19:45:39.374859] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-07-24 19:45:39.374863] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  [2024-07-24 19:45:39.429115] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  ************************************
00:05:51.710  START TEST scheduler_create_thread
00:05:51.710  ************************************
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  2
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  3
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  4
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  5
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  6
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  7
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  8
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:51.710  9
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:51.710  19:45:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:52.282  10
00:05:52.282  19:45:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:52.282  19:45:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:52.282  19:45:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:52.282  19:45:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:53.667  19:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:53.667  19:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:53.667  19:45:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:53.667  19:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:53.667  19:45:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.276  19:45:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:54.276  19:45:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:54.276  19:45:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:54.276  19:45:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:55.217  19:45:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:55.217  19:45:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:55.217  19:45:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:55.217  19:45:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:55.217  19:45:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:55.788  19:45:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:55.788
00:05:55.788  real	0m4.223s
00:05:55.788  user	0m0.022s
00:05:55.788  sys	0m0.009s
00:05:55.788  19:45:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:55.788  19:45:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:55.788  ************************************
00:05:55.788  END TEST scheduler_create_thread
00:05:55.788  ************************************
00:05:55.788  19:45:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:55.788  19:45:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3465688
00:05:55.788  19:45:43 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3465688 ']'
00:05:55.788  19:45:43 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3465688
00:05:55.788  19:45:43 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:05:55.788  19:45:43 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:55.788  19:45:43 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3465688
00:05:56.048  19:45:43 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
19:45:43 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
19:45:43 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3465688'
killing process with pid 3465688
19:45:43 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3465688
19:45:43 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3465688
00:05:56.048  [2024-07-24 19:45:43.970367] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:56.309
00:05:56.309  real	0m5.707s
00:05:56.309  user	0m12.727s
00:05:56.309  sys	0m0.360s
00:05:56.309  19:45:44 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:56.309  19:45:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:56.309  ************************************
00:05:56.309  END TEST event_scheduler
00:05:56.309  ************************************
00:05:56.309  19:45:44 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:56.309  19:45:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:56.309  19:45:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:56.309  19:45:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:56.309  19:45:44 event -- common/autotest_common.sh@10 -- # set +x
00:05:56.309  ************************************
00:05:56.309  START TEST app_repeat
00:05:56.309  ************************************
00:05:56.309  19:45:44 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3466902
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3466902'
Process app_repeat pid: 3466902
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:05:56.309  19:45:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3466902 /var/tmp/spdk-nbd.sock
00:05:56.309  19:45:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3466902 ']'
00:05:56.309  19:45:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:56.309  19:45:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:56.309  19:45:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:56.309  19:45:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:56.309  19:45:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:56.309  [2024-07-24 19:45:44.259773] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:05:56.309 [2024-07-24 19:45:44.259832] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466902 ] 00:05:56.570 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.570 [2024-07-24 19:45:44.319955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.570 [2024-07-24 19:45:44.385295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.570 [2024-07-24 19:45:44.385423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.141 19:45:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.141 19:45:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.141 19:45:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.402 Malloc0 00:05:57.402 19:45:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.664 Malloc1 00:05:57.664 19:45:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.664 /dev/nbd0 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.664 1+0 records in 00:05:57.664 1+0 records out 00:05:57.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233171 s, 17.6 MB/s 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.664 19:45:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.664 19:45:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.925 /dev/nbd1 00:05:57.925 19:45:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.925 19:45:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:57.925 19:45:45 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.925 1+0 records in 00:05:57.925 1+0 records out 00:05:57.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308845 s, 13.3 MB/s 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.925 19:45:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:57.925 19:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.925 19:45:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.925 19:45:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.925 19:45:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.925 19:45:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.186 { 00:05:58.186 "nbd_device": "/dev/nbd0", 00:05:58.186 "bdev_name": "Malloc0" 00:05:58.186 }, 00:05:58.186 { 00:05:58.186 "nbd_device": "/dev/nbd1", 00:05:58.186 "bdev_name": "Malloc1" 00:05:58.186 } 00:05:58.186 ]' 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.186 { 
00:05:58.186 "nbd_device": "/dev/nbd0", 00:05:58.186 "bdev_name": "Malloc0" 00:05:58.186 }, 00:05:58.186 { 00:05:58.186 "nbd_device": "/dev/nbd1", 00:05:58.186 "bdev_name": "Malloc1" 00:05:58.186 } 00:05:58.186 ]' 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.186 /dev/nbd1' 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.186 /dev/nbd1' 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.186 256+0 records in 00:05:58.186 256+0 records out 00:05:58.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116748 s, 89.8 MB/s 00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:05:58.186 19:45:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.186 256+0 records in 00:05:58.186 256+0 records out 00:05:58.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158109 s, 66.3 MB/s 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.186 256+0 records in 00:05:58.186 256+0 records out 00:05:58.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016978 s, 61.8 MB/s 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.186 19:45:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.187 19:45:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.448 19:45:46 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.448 19:45:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.709 19:45:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.709 19:45:46 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:05:58.709 19:45:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.970 19:45:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:58.970 [2024-07-24 19:45:46.910608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.231 [2024-07-24 19:45:46.974137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.231 [2024-07-24 19:45:46.974140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.231 [2024-07-24 19:45:47.005496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.231 [2024-07-24 19:45:47.005531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.534 19:45:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.534 19:45:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:02.534 spdk_app_start Round 1 00:06:02.534 19:45:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3466902 /var/tmp/spdk-nbd.sock 00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3466902 ']' 00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
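The `nbd_get_count` calls traced here pipe `rpc.py ... nbd_get_disks` through `jq -r '.[] | .nbd_device'` and then count matches with `grep -c /dev/nbd` (expecting 2 while attached, 0 after teardown). A sketch of just the counting step, with a canned name list standing in for the jq-extracted RPC output:

```shell
# Sketch of the nbd_get_count tail end: count device names in the
# newline-separated list that jq extracts from nbd_get_disks JSON.
count_nbd() {
    # grep -c exits 1 when the count is 0, so keep the "0" it prints
    # and swallow the non-zero status, as the helper's `|| true` logic does.
    printf '%s\n' "$1" | grep -c /dev/nbd || true
}
```

The `true` branch in the log (`-- # true`) is this same idiom: an empty disk list must yield count 0 without aborting the `set -e` test script.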
00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.534 19:45:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:02.534 19:45:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.534 Malloc0 00:06:02.534 19:45:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.534 Malloc1 00:06:02.534 19:45:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.534 /dev/nbd0 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.534 19:45:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.534 1+0 records in 00:06:02.534 1+0 records out 00:06:02.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223583 s, 18.3 MB/s 00:06:02.534 19:45:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:02.796 19:45:50 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.796 /dev/nbd1 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.796 1+0 records in 00:06:02.796 1+0 records out 00:06:02.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281235 s, 14.6 MB/s 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:02.796 19:45:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.796 19:45:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.057 { 00:06:03.057 "nbd_device": "/dev/nbd0", 00:06:03.057 "bdev_name": "Malloc0" 00:06:03.057 }, 00:06:03.057 { 00:06:03.057 "nbd_device": "/dev/nbd1", 00:06:03.057 "bdev_name": "Malloc1" 00:06:03.057 } 00:06:03.057 ]' 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.057 { 00:06:03.057 "nbd_device": "/dev/nbd0", 00:06:03.057 "bdev_name": "Malloc0" 00:06:03.057 }, 00:06:03.057 { 00:06:03.057 "nbd_device": "/dev/nbd1", 00:06:03.057 "bdev_name": "Malloc1" 00:06:03.057 } 00:06:03.057 ]' 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.057 /dev/nbd1' 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.057 19:45:50 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.057 /dev/nbd1' 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.057 19:45:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.057 256+0 records in 00:06:03.057 256+0 records out 00:06:03.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118606 s, 88.4 MB/s 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.058 256+0 records in 00:06:03.058 256+0 records out 00:06:03.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160641 s, 65.3 MB/s 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.058 256+0 records in 00:06:03.058 256+0 records out 00:06:03.058 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167197 s, 62.7 MB/s 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.058 19:45:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.319 19:45:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.579 19:45:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.579 19:45:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.580 19:45:51 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.580 19:45:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.580 19:45:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.840 19:45:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:04.105 [2024-07-24 19:45:51.809243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.105 [2024-07-24 19:45:51.873385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.105 [2024-07-24 19:45:51.873474] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.105 [2024-07-24 19:45:51.905507] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.105 [2024-07-24 19:45:51.905542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.408 19:45:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.408 19:45:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:07.408 spdk_app_start Round 2 00:06:07.408 19:45:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3466902 /var/tmp/spdk-nbd.sock 00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3466902 ']' 00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.408 19:45:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:07.408 19:45:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.408 Malloc0 00:06:07.408 19:45:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.408 Malloc1 00:06:07.408 19:45:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.408 /dev/nbd0 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.408 1+0 records in 00:06:07.408 1+0 records out 00:06:07.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277379 s, 14.8 MB/s 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:07.408 19:45:55 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:07.408 19:45:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.408 19:45:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.670 /dev/nbd1 00:06:07.670 19:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.670 19:45:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.670 1+0 records in 00:06:07.670 1+0 records out 00:06:07.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025449 s, 16.1 MB/s 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:07.670 19:45:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:07.670 19:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.670 19:45:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.670 19:45:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.670 19:45:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.670 19:45:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.932 { 00:06:07.932 "nbd_device": "/dev/nbd0", 00:06:07.932 "bdev_name": "Malloc0" 00:06:07.932 }, 00:06:07.932 { 00:06:07.932 "nbd_device": "/dev/nbd1", 00:06:07.932 "bdev_name": "Malloc1" 00:06:07.932 } 00:06:07.932 ]' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.932 { 00:06:07.932 "nbd_device": "/dev/nbd0", 00:06:07.932 "bdev_name": "Malloc0" 00:06:07.932 }, 00:06:07.932 { 00:06:07.932 "nbd_device": "/dev/nbd1", 00:06:07.932 "bdev_name": "Malloc1" 00:06:07.932 } 00:06:07.932 ]' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.932 /dev/nbd1' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.932 /dev/nbd1' 00:06:07.932 
19:45:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.932 256+0 records in 00:06:07.932 256+0 records out 00:06:07.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116005 s, 90.4 MB/s 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.932 256+0 records in 00:06:07.932 256+0 records out 00:06:07.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162259 s, 64.6 MB/s 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.932 256+0 records in 00:06:07.932 256+0 records out 00:06:07.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171255 s, 61.2 MB/s 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.932 19:45:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.193 19:45:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.193 19:45:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.454 19:45:56 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.454 19:45:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.455 19:45:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.455 19:45:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.716 19:45:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.977 [2024-07-24 19:45:56.688828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.977 [2024-07-24 19:45:56.751993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.977 [2024-07-24 19:45:56.751995] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.977 [2024-07-24 19:45:56.783361] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.977 [2024-07-24 19:45:56.783400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.280 19:45:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3466902 /var/tmp/spdk-nbd.sock 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3466902 ']' 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:12.280 19:45:59 event.app_repeat -- event/event.sh@39 -- # killprocess 3466902 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3466902 ']' 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3466902 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3466902 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3466902' 00:06:12.280 killing process with pid 3466902 00:06:12.280 19:45:59 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3466902 00:06:12.281 19:45:59 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3466902 00:06:12.281 spdk_app_start is called in Round 0. 00:06:12.281 Shutdown signal received, stop current app iteration 00:06:12.281 Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 reinitialization... 00:06:12.281 spdk_app_start is called in Round 1. 00:06:12.281 Shutdown signal received, stop current app iteration 00:06:12.281 Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 reinitialization... 00:06:12.281 spdk_app_start is called in Round 2. 
00:06:12.281 Shutdown signal received, stop current app iteration 00:06:12.281 Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 reinitialization... 00:06:12.281 spdk_app_start is called in Round 3. 00:06:12.281 Shutdown signal received, stop current app iteration 00:06:12.281 19:45:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.281 19:45:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:12.281 00:06:12.281 real 0m15.656s 00:06:12.281 user 0m33.722s 00:06:12.281 sys 0m2.170s 00:06:12.281 19:45:59 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.281 19:45:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.281 ************************************ 00:06:12.281 END TEST app_repeat 00:06:12.281 ************************************ 00:06:12.281 19:45:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.281 19:45:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.281 19:45:59 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.281 19:45:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.281 19:45:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.281 ************************************ 00:06:12.281 START TEST cpu_locks 00:06:12.281 ************************************ 00:06:12.281 19:45:59 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.281 * Looking for test storage... 
00:06:12.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:12.281 19:46:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.281 19:46:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.281 19:46:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.281 19:46:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.281 19:46:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.281 19:46:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.281 19:46:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.281 ************************************ 00:06:12.281 START TEST default_locks 00:06:12.281 ************************************ 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3470229 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3470229 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3470229 ']' 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:12.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.281 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.281 [2024-07-24 19:46:00.152611] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:12.281 [2024-07-24 19:46:00.152681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470229 ] 00:06:12.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.281 [2024-07-24 19:46:00.218633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.597 [2024-07-24 19:46:00.295425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.185 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.185 19:46:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:13.185 19:46:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3470229 00:06:13.185 19:46:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.185 19:46:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3470229 00:06:13.445 lslocks: write error 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3470229 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3470229 ']' 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3470229 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:13.445 19:46:01 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3470229 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3470229' 00:06:13.445 killing process with pid 3470229 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3470229 00:06:13.445 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3470229 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3470229 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3470229 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3470229 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3470229 ']' 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3470229) - No such process 00:06:13.706 ERROR: process (pid: 3470229) is no longer running 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:13.706 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.707 00:06:13.707 real 0m1.465s 00:06:13.707 user 0m1.543s 00:06:13.707 sys 0m0.499s 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.707 19:46:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:13.707 ************************************ 00:06:13.707 END TEST default_locks 00:06:13.707 ************************************ 00:06:13.707 19:46:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.707 19:46:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.707 19:46:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.707 19:46:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.707 ************************************ 00:06:13.707 START TEST default_locks_via_rpc 00:06:13.707 ************************************ 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3470550 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3470550 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3470550 ']' 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.707 19:46:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.967 [2024-07-24 19:46:01.688076] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:13.967 [2024-07-24 19:46:01.688125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470550 ] 00:06:13.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.968 [2024-07-24 19:46:01.746956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.968 [2024-07-24 19:46:01.811866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3470550 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3470550 00:06:14.539 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3470550 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3470550 ']' 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3470550 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3470550 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3470550' 00:06:15.112 killing process with pid 3470550 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@969 -- # kill 3470550 00:06:15.112 19:46:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3470550 00:06:15.373 00:06:15.373 real 0m1.574s 00:06:15.373 user 0m1.662s 00:06:15.373 sys 0m0.521s 00:06:15.373 19:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.373 19:46:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.373 ************************************ 00:06:15.373 END TEST default_locks_via_rpc 00:06:15.373 ************************************ 00:06:15.373 19:46:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.373 19:46:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.373 19:46:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.373 19:46:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.373 ************************************ 00:06:15.373 START TEST non_locking_app_on_locked_coremask 00:06:15.373 ************************************ 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3470903 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3470903 /var/tmp/spdk.sock 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3470903 ']' 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.373 19:46:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.647 [2024-07-24 19:46:03.336270] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:15.647 [2024-07-24 19:46:03.336321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470903 ] 00:06:15.647 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.647 [2024-07-24 19:46:03.397819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.647 [2024-07-24 19:46:03.469962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3471235 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3471235 /var/tmp/spdk2.sock 00:06:16.220 19:46:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3471235 ']' 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.220 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.220 [2024-07-24 19:46:04.163234] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:16.220 [2024-07-24 19:46:04.163286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471235 ] 00:06:16.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.481 [2024-07-24 19:46:04.251415] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.481 [2024-07-24 19:46:04.251440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.481 [2024-07-24 19:46:04.380742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.053 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.053 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.053 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3470903 00:06:17.053 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.053 19:46:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3470903 00:06:17.624 lslocks: write error 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3470903 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3470903 ']' 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3470903 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3470903 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3470903' 00:06:17.624 killing process with pid 3470903 00:06:17.624 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3470903 00:06:17.885 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3470903 00:06:18.145 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3471235 00:06:18.145 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3471235 ']' 00:06:18.145 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3471235 00:06:18.145 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.145 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.145 19:46:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471235 00:06:18.145 19:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.145 19:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.145 19:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471235' 00:06:18.145 killing process with pid 3471235 00:06:18.145 19:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3471235 00:06:18.145 19:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3471235 00:06:18.406 00:06:18.406 real 0m2.975s 00:06:18.406 user 0m3.264s 00:06:18.406 sys 0m0.870s 00:06:18.406 19:46:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.406 19:46:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.406 ************************************ 00:06:18.406 END TEST non_locking_app_on_locked_coremask 00:06:18.406 ************************************ 00:06:18.406 19:46:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.406 19:46:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.406 19:46:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.406 19:46:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.406 ************************************ 00:06:18.406 START TEST locking_app_on_unlocked_coremask 00:06:18.406 ************************************ 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3471609 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3471609 /var/tmp/spdk.sock 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3471609 ']' 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.406 19:46:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.406 19:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.667 [2024-07-24 19:46:06.392068] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:18.667 [2024-07-24 19:46:06.392115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471609 ] 00:06:18.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.667 [2024-07-24 19:46:06.452220] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.667 [2024-07-24 19:46:06.452245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.667 [2024-07-24 19:46:06.516445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.238 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.238 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.238 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.238 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3471861 00:06:19.238 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3471861 /var/tmp/spdk2.sock 00:06:19.238 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3471861 ']' 00:06:19.238 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.239 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.239 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.239 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.239 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.239 [2024-07-24 19:46:07.183590] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:06:19.239 [2024-07-24 19:46:07.183641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471861 ] 00:06:19.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.499 [2024-07-24 19:46:07.271557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.499 [2024-07-24 19:46:07.400829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.069 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.069 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.069 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3471861 00:06:20.069 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3471861 00:06:20.069 19:46:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.641 lslocks: write error 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3471609 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3471609 ']' 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3471609 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471609 00:06:20.641 19:46:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471609' 00:06:20.641 killing process with pid 3471609 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3471609 00:06:20.641 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3471609 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3471861 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3471861 ']' 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3471861 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471861 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471861' 00:06:21.213 killing process with pid 3471861 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 3471861 00:06:21.213 19:46:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3471861 00:06:21.213 00:06:21.213 real 0m2.834s 00:06:21.213 user 0m3.116s 00:06:21.213 sys 0m0.798s 00:06:21.213 19:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.213 19:46:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.213 ************************************ 00:06:21.213 END TEST locking_app_on_unlocked_coremask 00:06:21.213 ************************************ 00:06:21.474 19:46:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:21.474 19:46:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.474 19:46:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.474 19:46:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.474 ************************************ 00:06:21.474 START TEST locking_app_on_locked_coremask 00:06:21.474 ************************************ 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3472318 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3472318 /var/tmp/spdk.sock 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3472318 ']' 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.474 19:46:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.474 [2024-07-24 19:46:09.291637] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:21.474 [2024-07-24 19:46:09.291687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472318 ] 00:06:21.474 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.474 [2024-07-24 19:46:09.351921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.474 [2024-07-24 19:46:09.422662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3472347 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3472347 /var/tmp/spdk2.sock 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@650 -- # local es=0 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3472347 /var/tmp/spdk2.sock 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3472347 /var/tmp/spdk2.sock 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3472347 ']' 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.416 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.416 [2024-07-24 19:46:10.107569] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:22.416 [2024-07-24 19:46:10.107625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472347 ] 00:06:22.416 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.416 [2024-07-24 19:46:10.197347] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3472318 has claimed it. 00:06:22.416 [2024-07-24 19:46:10.197388] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3472347) - No such process 00:06:22.988 ERROR: process (pid: 3472347) is no longer running 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 3472318 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3472318 00:06:22.988 19:46:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.248 lslocks: write error 00:06:23.248 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3472318 00:06:23.248 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3472318 ']' 00:06:23.248 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3472318 00:06:23.248 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:23.248 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.248 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3472318 00:06:23.248 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.249 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.249 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3472318' 00:06:23.249 killing process with pid 3472318 00:06:23.249 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3472318 00:06:23.249 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3472318 00:06:23.508 00:06:23.508 real 0m2.180s 00:06:23.508 user 0m2.434s 00:06:23.508 sys 0m0.588s 00:06:23.508 19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.508 
19:46:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.508 ************************************ 00:06:23.508 END TEST locking_app_on_locked_coremask 00:06:23.508 ************************************ 00:06:23.508 19:46:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.508 19:46:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.508 19:46:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.508 19:46:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.768 ************************************ 00:06:23.769 START TEST locking_overlapped_coremask 00:06:23.769 ************************************ 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3472693 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3472693 /var/tmp/spdk.sock 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3472693 ']' 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:23.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.769 19:46:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.769 [2024-07-24 19:46:11.532449] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:23.769 [2024-07-24 19:46:11.532499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472693 ] 00:06:23.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.769 [2024-07-24 19:46:11.592653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.769 [2024-07-24 19:46:11.663491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.769 [2024-07-24 19:46:11.663606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.769 [2024-07-24 19:46:11.663609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3473017 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3473017 /var/tmp/spdk2.sock 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3473017 /var/tmp/spdk2.sock 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3473017 /var/tmp/spdk2.sock 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3473017 ']' 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.711 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.711 [2024-07-24 19:46:12.361772] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:06:24.711 [2024-07-24 19:46:12.361830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473017 ] 00:06:24.711 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.711 [2024-07-24 19:46:12.432680] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3472693 has claimed it. 00:06:24.711 [2024-07-24 19:46:12.432710] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3473017) - No such process 00:06:25.282 ERROR: process (pid: 3473017) is no longer running 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.282 19:46:12 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3472693 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3472693 ']' 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3472693 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.282 19:46:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3472693 00:06:25.282 19:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.282 19:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.282 19:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3472693' 00:06:25.282 killing process with pid 3472693 00:06:25.282 19:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3472693 00:06:25.282 19:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3472693 00:06:25.282 00:06:25.282 real 0m1.755s 00:06:25.282 user 0m4.969s 00:06:25.282 sys 0m0.366s 00:06:25.282 19:46:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.282 19:46:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.282 ************************************ 00:06:25.282 END TEST locking_overlapped_coremask 00:06:25.282 ************************************ 00:06:25.553 19:46:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:25.553 19:46:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.553 19:46:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.553 19:46:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.553 ************************************ 00:06:25.553 START TEST locking_overlapped_coremask_via_rpc 00:06:25.553 ************************************ 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3473069 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3473069 /var/tmp/spdk.sock 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3473069 ']' 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.553 19:46:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.553 [2024-07-24 19:46:13.374517] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:25.553 [2024-07-24 19:46:13.374572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473069 ] 00:06:25.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.553 [2024-07-24 19:46:13.438049] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:25.553 [2024-07-24 19:46:13.438081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.817 [2024-07-24 19:46:13.514099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.817 [2024-07-24 19:46:13.514233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.817 [2024-07-24 19:46:13.514257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.389 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.389 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.389 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3473401 00:06:26.390 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3473401 /var/tmp/spdk2.sock 00:06:26.390 19:46:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3473401 ']' 00:06:26.390 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:26.390 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.390 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.390 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.390 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.390 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.390 [2024-07-24 19:46:14.191089] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:26.390 [2024-07-24 19:46:14.191141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473401 ] 00:06:26.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.390 [2024-07-24 19:46:14.269379] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.390 [2024-07-24 19:46:14.269399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.651 [2024-07-24 19:46:14.375223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.651 [2024-07-24 19:46:14.375328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.651 [2024-07-24 19:46:14.375326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.223 19:46:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.223 [2024-07-24 19:46:14.971258] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3473069 has claimed it. 00:06:27.223 request: 00:06:27.223 { 00:06:27.223 "method": "framework_enable_cpumask_locks", 00:06:27.223 "req_id": 1 00:06:27.223 } 00:06:27.223 Got JSON-RPC error response 00:06:27.223 response: 00:06:27.223 { 00:06:27.223 "code": -32603, 00:06:27.223 "message": "Failed to claim CPU core: 2" 00:06:27.223 } 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3473069 /var/tmp/spdk.sock 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 3473069 ']' 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.223 19:46:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3473401 /var/tmp/spdk2.sock 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3473401 ']' 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.223 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.527 00:06:27.527 real 0m2.014s 00:06:27.527 user 0m0.783s 00:06:27.527 sys 0m0.150s 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.527 19:46:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.527 ************************************ 00:06:27.527 END TEST locking_overlapped_coremask_via_rpc 00:06:27.527 ************************************ 00:06:27.527 19:46:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.527 19:46:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3473069 ]] 00:06:27.527 19:46:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3473069 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3473069 ']' 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3473069 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3473069 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3473069' 00:06:27.527 killing process with pid 3473069 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3473069 00:06:27.527 19:46:15 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3473069 00:06:27.818 19:46:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3473401 ]] 00:06:27.818 19:46:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3473401 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3473401 ']' 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3473401 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3473401 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3473401' 00:06:27.818 killing process with pid 3473401 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3473401 00:06:27.818 19:46:15 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3473401 00:06:28.079 19:46:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.079 19:46:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.079 19:46:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3473069 ]] 00:06:28.079 19:46:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3473069 00:06:28.079 19:46:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3473069 ']' 00:06:28.079 19:46:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3473069 00:06:28.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3473069) - No such process 00:06:28.079 19:46:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3473069 is not found' 00:06:28.079 Process with pid 3473069 is not found 00:06:28.079 19:46:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3473401 ]] 00:06:28.079 19:46:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3473401 00:06:28.079 19:46:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3473401 ']' 00:06:28.079 19:46:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3473401 00:06:28.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3473401) - No such process 00:06:28.079 19:46:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3473401 is not found' 00:06:28.079 Process with pid 3473401 is not found 00:06:28.079 19:46:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.079 00:06:28.079 real 0m15.928s 00:06:28.079 user 0m27.314s 00:06:28.079 sys 0m4.653s 00:06:28.079 19:46:15 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.079 
19:46:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.079 ************************************ 00:06:28.079 END TEST cpu_locks 00:06:28.079 ************************************ 00:06:28.079 00:06:28.079 real 0m41.451s 00:06:28.079 user 1m20.368s 00:06:28.079 sys 0m7.759s 00:06:28.079 19:46:15 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.079 19:46:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.079 ************************************ 00:06:28.079 END TEST event 00:06:28.079 ************************************ 00:06:28.079 19:46:15 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.079 19:46:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.079 19:46:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.079 19:46:15 -- common/autotest_common.sh@10 -- # set +x 00:06:28.079 ************************************ 00:06:28.079 START TEST thread 00:06:28.079 ************************************ 00:06:28.079 19:46:15 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.340 * Looking for test storage... 
00:06:28.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:28.340 19:46:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.341 19:46:16 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:28.341 19:46:16 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.341 19:46:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.341 ************************************ 00:06:28.341 START TEST thread_poller_perf 00:06:28.341 ************************************ 00:06:28.341 19:46:16 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.341 [2024-07-24 19:46:16.144543] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:28.341 [2024-07-24 19:46:16.144645] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473842 ] 00:06:28.341 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.341 [2024-07-24 19:46:16.211844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.341 [2024-07-24 19:46:16.285356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.341 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:29.728 ====================================== 00:06:29.728 busy:2410878486 (cyc) 00:06:29.728 total_run_count: 288000 00:06:29.728 tsc_hz: 2400000000 (cyc) 00:06:29.728 ====================================== 00:06:29.728 poller_cost: 8371 (cyc), 3487 (nsec) 00:06:29.728 00:06:29.728 real 0m1.224s 00:06:29.728 user 0m1.140s 00:06:29.728 sys 0m0.080s 00:06:29.728 19:46:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.728 19:46:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.728 ************************************ 00:06:29.728 END TEST thread_poller_perf 00:06:29.728 ************************************ 00:06:29.728 19:46:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.728 19:46:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:29.728 19:46:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.728 19:46:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.728 ************************************ 00:06:29.728 START TEST thread_poller_perf 00:06:29.728 ************************************ 00:06:29.728 19:46:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.728 [2024-07-24 19:46:17.445343] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:06:29.728 [2024-07-24 19:46:17.445429] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474193 ] 00:06:29.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.728 [2024-07-24 19:46:17.509170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.728 [2024-07-24 19:46:17.579336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.728 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:31.114 ====================================== 00:06:31.114 busy:2402356748 (cyc) 00:06:31.114 total_run_count: 3811000 00:06:31.114 tsc_hz: 2400000000 (cyc) 00:06:31.114 ====================================== 00:06:31.114 poller_cost: 630 (cyc), 262 (nsec) 00:06:31.114 00:06:31.114 real 0m1.209s 00:06:31.114 user 0m1.133s 00:06:31.114 sys 0m0.073s 00:06:31.114 19:46:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.114 19:46:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.114 ************************************ 00:06:31.114 END TEST thread_poller_perf 00:06:31.114 ************************************ 00:06:31.114 19:46:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.114 00:06:31.114 real 0m2.686s 00:06:31.114 user 0m2.358s 00:06:31.114 sys 0m0.337s 00:06:31.114 19:46:18 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.114 19:46:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.114 ************************************ 00:06:31.114 END TEST thread 00:06:31.114 ************************************ 00:06:31.114 19:46:18 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:31.114 19:46:18 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
00:06:31.114 19:46:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.114 19:46:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.114 19:46:18 -- common/autotest_common.sh@10 -- # set +x 00:06:31.114 ************************************ 00:06:31.114 START TEST app_cmdline 00:06:31.114 ************************************ 00:06:31.114 19:46:18 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:31.114 * Looking for test storage... 00:06:31.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:31.114 19:46:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:31.114 19:46:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3474467 00:06:31.114 19:46:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3474467 00:06:31.114 19:46:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:31.114 19:46:18 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3474467 ']' 00:06:31.114 19:46:18 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.114 19:46:18 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.114 19:46:18 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.114 19:46:18 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.114 19:46:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.114 [2024-07-24 19:46:18.912634] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:06:31.114 [2024-07-24 19:46:18.912711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474467 ] 00:06:31.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.114 [2024-07-24 19:46:18.976447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.114 [2024-07-24 19:46:19.051173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.055 19:46:19 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.055 19:46:19 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:32.055 19:46:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:32.055 { 00:06:32.055 "version": "SPDK v24.09-pre git sha1 19f5787c8", 00:06:32.055 "fields": { 00:06:32.055 "major": 24, 00:06:32.055 "minor": 9, 00:06:32.055 "patch": 0, 00:06:32.055 "suffix": "-pre", 00:06:32.056 "commit": "19f5787c8" 00:06:32.056 } 00:06:32.056 } 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:32.056 19:46:19 app_cmdline -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:32.056 19:46:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:32.056 19:46:19 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.316 request: 00:06:32.317 { 00:06:32.317 "method": "env_dpdk_get_mem_stats", 00:06:32.317 "req_id": 1 
00:06:32.317 } 00:06:32.317 Got JSON-RPC error response 00:06:32.317 response: 00:06:32.317 { 00:06:32.317 "code": -32601, 00:06:32.317 "message": "Method not found" 00:06:32.317 } 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.317 19:46:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3474467 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3474467 ']' 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3474467 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3474467 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3474467' 00:06:32.317 killing process with pid 3474467 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@969 -- # kill 3474467 00:06:32.317 19:46:20 app_cmdline -- common/autotest_common.sh@974 -- # wait 3474467 00:06:32.578 00:06:32.578 real 0m1.562s 00:06:32.578 user 0m1.862s 00:06:32.578 sys 0m0.421s 00:06:32.578 19:46:20 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.578 19:46:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.578 ************************************ 00:06:32.578 END TEST app_cmdline 00:06:32.578 ************************************ 00:06:32.578 19:46:20 -- 
spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.578 19:46:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.578 19:46:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.578 19:46:20 -- common/autotest_common.sh@10 -- # set +x 00:06:32.578 ************************************ 00:06:32.578 START TEST version 00:06:32.578 ************************************ 00:06:32.578 19:46:20 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.578 * Looking for test storage... 00:06:32.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.578 19:46:20 version -- app/version.sh@17 -- # get_header_version major 00:06:32.578 19:46:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.578 19:46:20 version -- app/version.sh@14 -- # cut -f2 00:06:32.578 19:46:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.578 19:46:20 version -- app/version.sh@17 -- # major=24 00:06:32.578 19:46:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:32.578 19:46:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.578 19:46:20 version -- app/version.sh@14 -- # cut -f2 00:06:32.578 19:46:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.578 19:46:20 version -- app/version.sh@18 -- # minor=9 00:06:32.578 19:46:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:32.578 19:46:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.578 19:46:20 version -- app/version.sh@14 -- # cut -f2 00:06:32.578 19:46:20 
version -- app/version.sh@14 -- # tr -d '"' 00:06:32.578 19:46:20 version -- app/version.sh@19 -- # patch=0 00:06:32.578 19:46:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:32.578 19:46:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.578 19:46:20 version -- app/version.sh@14 -- # cut -f2 00:06:32.578 19:46:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.578 19:46:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:32.578 19:46:20 version -- app/version.sh@22 -- # version=24.9 00:06:32.578 19:46:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:32.578 19:46:20 version -- app/version.sh@28 -- # version=24.9rc0 00:06:32.578 19:46:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.578 19:46:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:32.840 19:46:20 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:32.840 19:46:20 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:32.840 00:06:32.840 real 0m0.185s 00:06:32.840 user 0m0.079s 00:06:32.840 sys 0m0.149s 00:06:32.840 19:46:20 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.840 19:46:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:32.840 ************************************ 00:06:32.840 END TEST version 00:06:32.840 ************************************ 00:06:32.840 19:46:20 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@202 -- # uname -s 00:06:32.840 19:46:20 -- spdk/autotest.sh@202 -- # [[ Linux == 
Linux ]] 00:06:32.840 19:46:20 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:32.840 19:46:20 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:32.840 19:46:20 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:32.840 19:46:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.840 19:46:20 -- common/autotest_common.sh@10 -- # set +x 00:06:32.840 19:46:20 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:32.840 19:46:20 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:32.840 19:46:20 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.840 19:46:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.840 19:46:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.840 19:46:20 -- common/autotest_common.sh@10 -- # set +x 00:06:32.840 ************************************ 00:06:32.840 START TEST nvmf_tcp 00:06:32.840 ************************************ 00:06:32.840 19:46:20 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:33.102 * Looking for test storage... 00:06:33.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.102 19:46:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:33.102 19:46:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:33.102 19:46:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:33.102 19:46:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.102 19:46:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.102 19:46:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 START TEST nvmf_target_core 00:06:33.102 ************************************ 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:33.102 * Looking for test storage... 00:06:33.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.102 19:46:20 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.102 19:46:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 START TEST nvmf_abort 00:06:33.102 ************************************ 00:06:33.102 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.364 * Looking for test storage... 
00:06:33.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.364 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:33.365 19:46:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:33.365 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:39.952 19:46:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.952 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:39.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:39.953 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:39.953 19:46:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:39.953 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.953 19:46:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:39.953 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.953 19:46:27 
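The discovery loop traced above globs `/sys/bus/pci/devices/$pci/net/` to map each allowlisted PCI function to its kernel interface name (producing the "Found net devices under 0000:4b:00.0: cvl_0_0" lines). A minimal standalone sketch of that mapping, run against a mock sysfs tree so it needs no real NICs; the directory layout and device IDs mirror the log, but the script itself is an editor's illustration, not part of nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device mapping shown in the log, against a
# throwaway mock sysfs tree (no hardware or root needed).
set -euo pipefail

sysfs=$(mktemp -d)
# Fake one E810 function (vendor 0x8086, device 0x159b) exposing cvl_0_0.
pci="0000:4b:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"
echo 0x8086 > "$sysfs/$pci/vendor"
echo 0x159b > "$sysfs/$pci/device"

net_devs=()
for dev in "$sysfs"/*; do
    vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
    # Same allowlist idea as nvmf/common.sh: match known Intel E810 IDs.
    if [[ $vendor == 0x8086 ]] && [[ $device == 0x1592 || $device == 0x159b ]]; then
        pci_net_devs=("$dev/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path
        net_devs+=("${pci_net_devs[@]}")
        echo "Found net devices under ${dev##*/}: ${pci_net_devs[*]}"
    fi
done
rm -rf "$sysfs"
```

On the real system the same loop yields `cvl_0_0` and `cvl_0_1` for the two E810 ports.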
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.953 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.215 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.215 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.215 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:40.215 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.215 19:46:28 
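The `ip` and `iptables` commands traced above build the standard two-port loopback topology: one E810 port moves into a fresh namespace as the target side, its sibling stays in the root namespace as the initiator, and port 4420 is opened through the firewall. A dry-run sketch of that sequence that only prints the commands, so it runs unprivileged; interface names and addresses are taken straight from the log, and the `run` wrapper is the editor's addition:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init topology from nvmf/common.sh:
# print each command instead of executing it. Swap run()'s body for
# "$@" to apply for real (requires root).
set -euo pipefail

target_if=cvl_0_0   initiator_if=cvl_0_1
target_ip=10.0.0.2  initiator_ip=10.0.0.1
ns=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"                 # target port into the ns
run ip addr add "$initiator_ip/24" dev "$initiator_if"   # initiator stays in root ns
run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
# Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions before starting the target.
run ping -c 1 "$target_ip"
run ip netns exec "$ns" ping -c 1 "$initiator_ip"
```

The two ping blocks in the log (0.589 ms and 0.351 ms round trips) are exactly this final verification step succeeding.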
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:40.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:40.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:06:40.215 00:06:40.215 --- 10.0.0.2 ping statistics --- 00:06:40.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.215 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:06:40.215 00:06:40.215 --- 10.0.0.1 ping statistics --- 00:06:40.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.215 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:40.215 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:40.477 19:46:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3478715 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3478715 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3478715 ']' 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.477 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:40.477 [2024-07-24 19:46:28.235650] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:06:40.477 [2024-07-24 19:46:28.235714] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.477 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.477 [2024-07-24 19:46:28.322550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.477 [2024-07-24 19:46:28.417116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.477 [2024-07-24 19:46:28.417175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.477 [2024-07-24 19:46:28.417183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.477 [2024-07-24 19:46:28.417190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.477 [2024-07-24 19:46:28.417197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:40.477 [2024-07-24 19:46:28.417328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.477 [2024-07-24 19:46:28.417656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.477 [2024-07-24 19:46:28.417657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 [2024-07-24 19:46:29.067167] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 Malloc0 00:06:41.420 19:46:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 Delay0 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 [2024-07-24 19:46:29.149369] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.420 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:41.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.420 [2024-07-24 19:46:29.271058] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:43.967 Initializing NVMe Controllers 00:06:43.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:43.967 controller IO queue size 128 less than required 00:06:43.967 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:43.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:43.967 Initialization complete. Launching workers. 
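The `rpc_cmd` calls traced in target/abort.sh amount to a short rpc.py sequence: create the TCP transport, back a malloc bdev with a delay bdev, then expose it through a subsystem listening on 10.0.0.2:4420. Spelled out as plain invocations below; all method names and arguments are copied from the log, while the `rpc` variable (and the `echo` prefix, so this runs without a live nvmf_tgt) is illustrative:

```shell
#!/usr/bin/env bash
# The abort.sh subsystem setup as explicit rpc.py calls. Printed rather
# than executed, since they need a running nvmf_tgt; drop "echo" (and
# fix the script path for your tree) to run them for real.
set -euo pipefail

rpc="echo python3 scripts/rpc.py"   # path is illustrative

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
# Wrap Malloc0 in a delay bdev so in-flight I/O lingers long enough
# for the abort example to have something to abort.
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The deliberately slow Delay0 namespace is why the abort example that follows can report tens of thousands of successfully aborted commands.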
00:06:43.967 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 122, failed: 28841 00:06:43.967 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28901, failed to submit 62 00:06:43.967 success 28845, unsuccess 56, failed 0 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:43.967 rmmod nvme_tcp 00:06:43.967 rmmod nvme_fabrics 00:06:43.967 rmmod nvme_keyring 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:43.967 19:46:31 
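The abort example's summary counters can be checked mechanically from its output. A small sketch over the summary text shown above (the sample lines are copied from the log; the awk field positions are the editor's assumption about that fixed output format):

```shell
#!/usr/bin/env bash
# Pull the abort-test counters out of the example's summary lines.
set -euo pipefail

summary='NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 122, failed: 28841
CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28901, failed to submit 62
success 28845, unsuccess 56, failed 0'

# "failed" I/O on the NS line means aborted in flight, which is the
# point of the test; only the final line's "failed" count (aborts that
# errored outright) must be zero for a pass.
failed=$(awk '/^success/ {gsub(",", "", $6); print $6}' <<<"$summary")
echo "aborts that failed outright: $failed"
```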
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3478715 ']' 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3478715 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3478715 ']' 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3478715 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3478715 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3478715' 00:06:43.967 killing process with pid 3478715 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3478715 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3478715 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:43.967 19:46:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.967 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.880 00:06:45.880 real 0m12.709s 00:06:45.880 user 0m13.342s 00:06:45.880 sys 0m6.253s 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.880 ************************************ 00:06:45.880 END TEST nvmf_abort 00:06:45.880 ************************************ 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.880 ************************************ 00:06:45.880 START TEST nvmf_ns_hotplug_stress 00:06:45.880 ************************************ 00:06:45.880 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:46.141 * Looking for test storage... 
00:06:46.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.141 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:46.142 19:46:33 
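The PATH echoed above has accumulated the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes many times over because paths/export.sh is sourced once per nested `run_test`. Order-preserving deduplication is a small helper; this one is an editor's sketch and is not in the SPDK tree:

```shell
#!/usr/bin/env bash
# Order-preserving PATH dedup: keep the first occurrence of each entry,
# drop repeats, preserve relative order (collapses the duplicated
# /opt prefixes seen in the exported PATH above).
dedup_path() {
    local out= p
    local IFS=:
    for p in $1; do
        # ":$out:" bracketing makes the substring match whole entries only.
        [[ ":$out:" == *":$p:"* ]] || out+="${out:+:}$p"
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# -> /opt/go/bin:/usr/bin:/bin
```

Applied as `PATH=$(dedup_path "$PATH")`, though the duplicated entries are harmless beyond lookup-order noise and log bloat.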
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:46.142 19:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:52.770 19:46:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:52.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:52.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.770 19:46:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:52.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:52.770 Found net devices 
under 0000:4b:00.1: cvl_0_1 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.770 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:06:53.032 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.032 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.032 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.032 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.032 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.032 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.032 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.294 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.294 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:06:53.294 00:06:53.294 --- 10.0.0.2 ping statistics --- 00:06:53.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.294 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:53.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:06:53.294 00:06:53.294 --- 10.0.0.1 ping statistics --- 00:06:53.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.294 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3483543 00:06:53.294 19:46:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3483543 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3483543 ']' 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.294 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:53.294 [2024-07-24 19:46:41.156939] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:06:53.294 [2024-07-24 19:46:41.157005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.294 [2024-07-24 19:46:41.246596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.555 [2024-07-24 19:46:41.340077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:53.555 [2024-07-24 19:46:41.340147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.555 [2024-07-24 19:46:41.340155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.555 [2024-07-24 19:46:41.340162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.555 [2024-07-24 19:46:41.340168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.555 [2024-07-24 19:46:41.340371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.555 [2024-07-24 19:46:41.340638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.555 [2024-07-24 19:46:41.340640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:54.127 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:06:54.388 [2024-07-24 19:46:42.118710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.388 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:54.388 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.650 [2024-07-24 19:46:42.460164] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.650 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:54.911 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:54.911 Malloc0 00:06:54.911 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:55.171 Delay0 00:06:55.171 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.431 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:55.431 NULL1 00:06:55.431 19:46:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:55.691 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:55.691 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3484117 00:06:55.691 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:06:55.691 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.632 Read completed with error (sct=0, sc=11) 00:06:56.893 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.893 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:56.893 19:46:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:57.153 true 00:06:57.153 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:06:57.153 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.095 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.095 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:58.095 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:58.355 true 00:06:58.355 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:06:58.355 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.355 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.616 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:58.616 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:58.877 true 00:06:58.877 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:06:58.877 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.877 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.138 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:59.138 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:59.138 true 00:06:59.398 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:06:59.398 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.398 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.659 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:59.659 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:59.659 true 00:06:59.659 19:46:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:06:59.659 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.920 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.181 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:00.181 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:00.181 true 00:07:00.181 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:00.181 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.441 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.701 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:00.701 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:00.701 true 00:07:00.701 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:00.701 19:46:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.960 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.221 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:01.221 19:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:01.221 true 00:07:01.221 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:01.221 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.482 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.744 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:01.744 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:01.744 true 00:07:01.744 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:01.744 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.005 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.005 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:02.005 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:02.265 true 00:07:02.265 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:02.265 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.526 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.526 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:02.526 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:02.786 true 00:07:02.786 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:02.786 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.047 
19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.047 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:03.047 19:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:03.308 true 00:07:03.308 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:03.308 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.569 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.569 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:03.569 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:03.830 true 00:07:03.830 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:03.830 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.091 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.091 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:04.091 19:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:04.351 true 00:07:04.351 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:04.351 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.612 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.612 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:04.612 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:04.873 true 00:07:04.873 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:04.873 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.873 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.134 
19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:05.134 19:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:05.395 true 00:07:05.395 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:05.395 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.395 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:05.656 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:05.916 true 00:07:05.916 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:05.916 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.916 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.177 19:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:06.177 19:46:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:06.474 true 00:07:06.474 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:06.474 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.474 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.739 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:06.739 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:06.739 true 00:07:06.739 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:06.739 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.998 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.258 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:07.258 19:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:07.258 true 00:07:07.258 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:07.258 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.518 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.778 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:07.778 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:07.778 true 00:07:07.778 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:07.778 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.037 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.297 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:08.297 19:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:08.297 true 00:07:08.297 19:46:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:08.297 19:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.238 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.238 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:09.238 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:09.500 true 00:07:09.500 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:09.500 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.444 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.444 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:10.444 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:10.705 true 00:07:10.705 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:10.705 19:46:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.967 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.967 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:10.967 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:11.228 true 00:07:11.228 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:11.228 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.490 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.490 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:11.490 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:11.751 true 00:07:11.751 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:11.751 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.751 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.012 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:12.012 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:12.273 true 00:07:12.273 19:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:12.273 19:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.659 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.659 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.659 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:13.659 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:13.659 true 00:07:13.659 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:13.659 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.603 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.864 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:14.864 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:14.864 true 00:07:14.864 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:14.864 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.125 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.387 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:15.387 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 
00:07:15.387 true 00:07:15.387 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:15.387 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.649 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.910 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:15.910 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:15.910 true 00:07:15.910 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:15.910 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.170 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.170 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:16.170 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:16.431 true 00:07:16.431 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
3484117 00:07:16.431 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.691 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.691 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:16.691 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:16.952 true 00:07:16.952 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:16.952 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.212 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.212 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:17.212 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:17.473 true 00:07:17.473 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:17.473 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.734 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.734 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:17.734 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:17.996 true 00:07:17.996 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:17.996 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.257 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.257 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:18.257 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:18.518 true 00:07:18.518 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:18.519 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.780 
19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.780 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:18.780 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:19.041 true 00:07:19.041 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:19.041 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.041 19:47:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.299 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:19.299 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:19.559 true 00:07:19.559 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:19.559 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.559 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.820 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:19.820 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:20.080 true 00:07:20.080 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:20.080 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.080 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.342 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:20.342 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:20.603 true 00:07:20.603 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:20.603 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.603 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.864 
19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:20.864 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:20.864 true 00:07:21.145 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:21.145 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.145 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.449 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:21.449 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:21.449 true 00:07:21.449 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:21.449 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.709 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.969 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:21.969 19:47:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:21.969 true 00:07:21.969 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:21.969 19:47:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.912 19:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.172 19:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:23.172 19:47:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:23.172 true 00:07:23.172 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:23.172 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.433 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.693 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:23.693 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:23.693 true 00:07:23.693 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:23.693 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.956 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.956 [2024-07-24 19:47:11.890479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.956 [2024-07-24 19:47:11.890544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.956 [2024-07-24 19:47:11.890572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.956 [2024-07-24 19:47:11.890598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.956 [2024-07-24 19:47:11.890628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.956 [2024-07-24 19:47:11.890653] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.956 [2024-07-24 19:47:11.890680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900769] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.900995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901917] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.901978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.958 [2024-07-24 19:47:11.902569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 
19:47:11.902734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.902982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.903989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 
[2024-07-24 19:47:11.904043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904427] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.904976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905171] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.905986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 
19:47:11.906376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.959 [2024-07-24 19:47:11.906513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.906974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 
[2024-07-24 19:47:11.907223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:23.960 [2024-07-24 19:47:11.907280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.256 [2024-07-24 19:47:11.907970] ctrlr_bdev.c: 
00:07:24.256 [2024-07-24 19:47:11.907999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:24.257 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:24.259 [identical *ERROR* line repeated through 2024-07-24 19:47:11.917744; repeats omitted]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.917775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.917802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.917852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.917879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.917928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.917956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.259 [2024-07-24 19:47:11.918311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918636] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.918964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 
19:47:11.919930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.919991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 
[2024-07-24 19:47:11.920823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:24.260 [2024-07-24 19:47:11.920907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.920972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 19:47:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:24.260 [2024-07-24 19:47:11.921291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.260 [2024-07-24 19:47:11.921512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.921933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 
[2024-07-24 19:47:11.922375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922779] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.922989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923668] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.923978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.924005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.924030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.924058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.924090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.261 [2024-07-24 19:47:11.924121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously from 19:47:11.924147 to 19:47:11.934802; duplicate lines trimmed]
00:07:24.265 [2024-07-24 19:47:11.934832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.934859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.934886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.934917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.934948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.934977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 
19:47:11.935290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.935983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 
[2024-07-24 19:47:11.936500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936943] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.936972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.265 [2024-07-24 19:47:11.937411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.937678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938164] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.938969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 
19:47:11.939089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.939922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 
[2024-07-24 19:47:11.940291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.940324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.940355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.940386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.940421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.266 [2024-07-24 19:47:11.940449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940724] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.940990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.267 [2024-07-24 19:47:11.941606] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.269 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:24.270 [2024-07-24 19:47:11.952492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.270 [2024-07-24 19:47:11.952858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.952887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.952921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 
19:47:11.952950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.953978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 
[2024-07-24 19:47:11.954229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954674] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.954965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.271 [2024-07-24 19:47:11.955455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955548] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.955998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 
19:47:11.956775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.956983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 
[2024-07-24 19:47:11.957610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.957762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958393] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.272 [2024-07-24 19:47:11.958701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.958974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.273 [2024-07-24 19:47:11.959285] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.969983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970103] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.970944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 
19:47:11.970976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.276 [2024-07-24 19:47:11.971891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.971918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.971947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.971977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 
[2024-07-24 19:47:11.972241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972657] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.972975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973560] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.973989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 
19:47:11.974767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.974994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.975022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.975050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.975080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.975110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.277 [2024-07-24 19:47:11.975137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 
[2024-07-24 19:47:11.975643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.975918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976400] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.278 [2024-07-24 19:47:11.976851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [...identical *ERROR* line repeated, 19:47:11.976880 through 19:47:11.983576; timestamps omitted...] 00:07:24.280 Message suppressed 999 times: Read completed with error (sct=0, sc=15) [...identical *ERROR* line repeated, 19:47:11.983606 through 19:47:11.987586; timestamps omitted...] 00:07:24.281 [2024-07-24
19:47:11.987612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.987642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.987676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.987707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.987736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.987766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.987794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.987822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.281 [2024-07-24 19:47:11.988424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 
[2024-07-24 19:47:11.988724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.988996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989147] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.989994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990363] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.990983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 
19:47:11.991506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.282 [2024-07-24 19:47:11.991694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.991984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 
[2024-07-24 19:47:11.992386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992847] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.992994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993960] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.993986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.283 [2024-07-24 19:47:11.994414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines repeated for every read command from 19:47:11.994445 through 19:47:12.005170; duplicate log lines omitted]
block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 
19:47:12.005723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.005815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.006050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.006085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.006118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.006146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.287 [2024-07-24 19:47:12.006178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 
[2024-07-24 19:47:12.006831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.006998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007244] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.007763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008461] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.008994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.009026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.288 [2024-07-24 19:47:12.009055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 
19:47:12.009320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.009983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 
[2024-07-24 19:47:12.010311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010968] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.010998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011843] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.289 [2024-07-24 19:47:12.011873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.292 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 
19:47:12.023455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.023938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 
[2024-07-24 19:47:12.024662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.293 [2024-07-24 19:47:12.024933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.024963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.024996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025083] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.025988] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.026965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 
19:47:12.027212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.027959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 
[2024-07-24 19:47:12.028129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.294 [2024-07-24 19:47:12.028464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.028495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.028523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.028676] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.028707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.028737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.028767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.028798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029773] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.295 [2024-07-24 19:47:12.029808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated many times; timestamps 2024-07-24 19:47:12.029843 through 19:47:12.040076 elided]
00:07:24.298 [2024-07-24 19:47:12.040464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040919] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.040980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 
19:47:12.041801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.298 [2024-07-24 19:47:12.041831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.041861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.041890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.041920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.041951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.041983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.042977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 
[2024-07-24 19:47:12.043007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043438] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.043989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044322] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.044629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.299 [2024-07-24 19:47:12.045565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 
19:47:12.045782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.045985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 
[2024-07-24 19:47:12.046651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.046980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047053] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.300 [2024-07-24 19:47:12.047829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error line repeated verbatim, timestamps 19:47:12.047861 through 19:47:12.057872; duplicates elided ...]
00:07:24.303 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:24.304 [2024-07-24 19:47:12.057904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:07:24.304 [2024-07-24 19:47:12.057933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.057962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.057992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058363] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.058983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 
19:47:12.059624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.059993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 
[2024-07-24 19:47:12.060525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.060975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061281] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.061992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062151] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.304 [2024-07-24 19:47:12.062722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.062973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 
19:47:12.063412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.063980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 
[2024-07-24 19:47:12.064325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064739] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.064978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.305 [2024-07-24 19:47:12.065219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same *ERROR* line repeated with timestamps 19:47:12.065256 through 19:47:12.069306; duplicates omitted ...] 00:07:24.306 true 00:07:24.306 [... same *ERROR* line repeated with timestamps 19:47:12.069331 through 19:47:12.076074; duplicates omitted ...] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 
19:47:12.076530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.308 [2024-07-24 19:47:12.076617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.076647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.076675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 
[2024-07-24 19:47:12.077798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.077997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078210] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.078996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079222] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.309 [2024-07-24 19:47:12.079885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.079913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.079945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.079973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 
19:47:12.080389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.080982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 
[2024-07-24 19:47:12.081236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.081971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082004] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082894] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.310 [2024-07-24 19:47:12.082927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:24.313 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.093811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.093862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.093891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.093919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.093949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.093977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 
19:47:12.094261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.094992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.095020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.095047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.095287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.095345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.095372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.095400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 [2024-07-24 19:47:12.095428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.314 
19:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117
19:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.102983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.103016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.103051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.103080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 
[2024-07-24 19:47:12.103114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.103146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.103175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.316 [2024-07-24 19:47:12.103207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103524] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.103991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104723] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.104988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 
19:47:12.105586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.105996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.317 [2024-07-24 19:47:12.106745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.106773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.106805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 
[2024-07-24 19:47:12.106834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.106867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.106896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.106925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.106955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.106987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107260] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.107976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108149] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.108808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 19:47:12.109340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [2024-07-24 
19:47:12.109371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.318 [... identical "Read NLB 1 * block size 512 > SGL length 1" error entries from ctrlr_bdev.c:309 repeated through 19:47:12.120356, duplicates elided ...] 00:07:24.322 [2024-07-24 
19:47:12.120409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.120988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 
[2024-07-24 19:47:12.121183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121597] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.121945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122803] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.122981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.322 [2024-07-24 19:47:12.123326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 
19:47:12.123693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.123973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 
[2024-07-24 19:47:12.124960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.124987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125389] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.125974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.126004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.126035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.126084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.126112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.126141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.323 [2024-07-24 19:47:12.126177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126297] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.126994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.127022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.324 [2024-07-24 19:47:12.127053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.324 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
19:47:12.137887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.137921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.137954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.137983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.138985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 
[2024-07-24 19:47:12.139016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.139046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.139076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.139111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.139138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.139194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.327 [2024-07-24 19:47:12.139228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139478] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.139997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140356] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.140924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 
19:47:12.141367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.141975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 
[2024-07-24 19:47:12.142458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.328 [2024-07-24 19:47:12.142515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142867] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.142984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.143994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144050] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.329 [2024-07-24 19:47:12.144540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.329 [last message repeated through 2024-07-24 19:47:12.155439; duplicate log lines collapsed] 00:07:24.332 [2024-07-24 19:47:12.155467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.332 [2024-07-24 19:47:12.155715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 
19:47:12.155918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.155978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 
[2024-07-24 19:47:12.156854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.156980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157279] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.157982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.333 [2024-07-24 19:47:12.158580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158741] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.158982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 
19:47:12.159645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.159924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 
[2024-07-24 19:47:12.160902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.334 [2024-07-24 19:47:12.160995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161340] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.161987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.162014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.162069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.162100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.162130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.162161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.162190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 [2024-07-24 19:47:12.162223] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.335 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 
19:47:12.173814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.173982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.339 [2024-07-24 19:47:12.174219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 
[2024-07-24 19:47:12.174646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.174992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175023] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.175947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176564] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.176984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.177015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.177044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.177081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.177111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.629 [2024-07-24 19:47:12.177140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 
19:47:12.177442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.177903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 
[2024-07-24 19:47:12.178468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178896] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.178987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179775] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.630 [2024-07-24 19:47:12.179804] [previous message repeated verbatim for each failed read attempt, timestamps 19:47:12.179834 through 19:47:12.190073; duplicate log lines trimmed]
> SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190538] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.190988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 
19:47:12.191834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.191986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 
[2024-07-24 19:47:12.192734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.192973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193159] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.634 [2024-07-24 19:47:12.193316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.193997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194317] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.194984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 
19:47:12.195104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.195674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 
[2024-07-24 19:47:12.196410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196888] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.635 [2024-07-24 19:47:12.196917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.196949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.196975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.636 [2024-07-24 19:47:12.197339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:24.636 [same *ERROR* line repeated verbatim, timestamps 19:47:12.197367 through 19:47:12.198606]
00:07:24.636 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:24.636 [same *ERROR* line repeated verbatim, timestamps 19:47:12.198635 through 19:47:12.207154]
> SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207528] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.207988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 
19:47:12.208509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.208932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.639 [2024-07-24 19:47:12.209482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 
[2024-07-24 19:47:12.209545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209966] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.209995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.210973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211126] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.211991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 
19:47:12.212056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.640 [2024-07-24 19:47:12.212987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 
[2024-07-24 19:47:12.213137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213508] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.641 [2024-07-24 19:47:12.213899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.224971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225156] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.225780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.226011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.226043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.644 [2024-07-24 19:47:12.226073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 
19:47:12.226352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.226976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 
[2024-07-24 19:47:12.227301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.227600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228091] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228956] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.228984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.645 [2024-07-24 19:47:12.229442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 
19:47:12.229858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.229918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.230980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 
[2024-07-24 19:47:12.231109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.646 [2024-07-24 19:47:12.231542] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:24.646 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated for each failed read, timestamps 2024-07-24 19:47:12.231570 through 19:47:12.236779 ...]
00:07:24.646 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:07:24.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:24.648 19:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:24.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) [repeated 5 times]
00:07:24.648 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated for each failed read, timestamps 2024-07-24 19:47:12.411488 through 19:47:12.416396 ...]
[2024-07-24 19:47:12.416396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.649 [2024-07-24 19:47:12.416707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 
[2024-07-24 19:47:12.416850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.416989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417266] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.417680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418423] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.418991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 19:47:12.419216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.650 [2024-07-24 
19:47:12.419243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.419730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 
[2024-07-24 19:47:12.420350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420752] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.420994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421608] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.421922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.651 [2024-07-24 19:47:12.422668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.652 [2024-07-24 19:47:12.422699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.652 [2024-07-24 19:47:12.422728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.652 [2024-07-24 19:47:12.422759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.652 [2024-07-24 19:47:12.422788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.652 [2024-07-24 
19:47:12.422816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.652 
[... the same "Read NLB 1 * block size 512 > SGL length 1" error repeated continuously from 19:47:12.422846 through 19:47:12.428108, elided ...]
00:07:24.654 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:24.654 
[... further repeats of the same error from 19:47:12.428141 through 19:47:12.432893, elided ...]
[2024-07-24 19:47:12.432917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.432940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.432964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.432988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.655 [2024-07-24 19:47:12.433177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 
[2024-07-24 19:47:12.433499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433917] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.433985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434805] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.434905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.435982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 
19:47:12.436046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.656 [2024-07-24 19:47:12.436689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 
[2024-07-24 19:47:12.436920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.436976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437421] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.657 [2024-07-24 19:47:12.437992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438373] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.657 [2024-07-24 19:47:12.438517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.438970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.439004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.439032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.439065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 19:47:12.439095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 [2024-07-24 
[2024-07-24 19:47:12.439129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.658 19:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:24.658 19:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
> SGL length 1 00:07:24.661 [2024-07-24 19:47:12.449804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.449832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.449862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.449889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.449919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.449947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.449975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450176] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.450881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.451543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.451605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.451632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.661 [2024-07-24 19:47:12.451662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 
19:47:12.451817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.451975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 
[2024-07-24 19:47:12.452665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.452975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453095] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.662 [2024-07-24 19:47:12.453658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.453978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454120] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.454966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 
19:47:12.454991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.455971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 
[2024-07-24 19:47:12.456237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456653] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.663 [2024-07-24 19:47:12.456703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.666 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:24.667 [2024-07-24 19:47:12.467756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 
[2024-07-24 19:47:12.467787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.467816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.467847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.467877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.467911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.467942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.467971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468220] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.667 [2024-07-24 19:47:12.468986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469073] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.469976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 
19:47:12.470304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.470989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 
[2024-07-24 19:47:12.471155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471568] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.471997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.472025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.668 [2024-07-24 19:47:12.472055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472760] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.472984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 
19:47:12.473655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.473827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.669 [2024-07-24 19:47:12.474471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.672 [2024-07-24 19:47:12.485125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.672 [2024-07-24 19:47:12.485157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.672 [2024-07-24 19:47:12.485184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.672 [2024-07-24 19:47:12.485218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 
[2024-07-24 19:47:12.485898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.485987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486324] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.486990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487213] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.487972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 
19:47:12.488430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.673 [2024-07-24 19:47:12.488779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.488803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.488834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.488865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.488897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.488929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.488959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.488990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 
[2024-07-24 19:47:12.489362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489799] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.489891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.490999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491070] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 19:47:12.491903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.674 [2024-07-24 
19:47:12.491937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* lines (timestamps 19:47:12.491967 through 19:47:12.500036) elided ...]
00:07:24.677 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical *ERROR* lines (timestamps 19:47:12.500068 through 19:47:12.502719) elided ...]
00:07:24.678 [2024-07-24 19:47:12.502748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.502991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 
[2024-07-24 19:47:12.503195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.503983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504101] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.678 [2024-07-24 19:47:12.504759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.504790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.504819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.504847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.504877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.504907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.504939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.504969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505029] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 
19:47:12.505928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.505956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.506979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 
[2024-07-24 19:47:12.507139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507599] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.507971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.679 [2024-07-24 19:47:12.508001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.679 [2024-07-24 19:47:12.508026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508807] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.508986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 19:47:12.509675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [2024-07-24 
19:47:12.509715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.680 [identical *ERROR* line repeated verbatim from 19:47:12.509744 through 19:47:12.520641; duplicates elided] [2024-07-24
19:47:12.520668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.520999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.521026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.521057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.521086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.683 [2024-07-24 19:47:12.521119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 
[2024-07-24 19:47:12.521575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.521942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522142] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.522990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523051] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.684 [2024-07-24 19:47:12.523872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.523906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 
19:47:12.523935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.523981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.524686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 
[2024-07-24 19:47:12.525477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525876] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.525993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526755] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.685 [2024-07-24 19:47:12.526842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.526872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.526903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.526932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.526965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.686 [2024-07-24 19:47:12.527455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:24.690 [2024-07-24
19:47:12.537925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.537954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.537982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.538994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 
[2024-07-24 19:47:12.539094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539809] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.539978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540732] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.540854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.690 [2024-07-24 19:47:12.541564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 
19:47:12.541783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.541982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 
[2024-07-24 19:47:12.542623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.542928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543152] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.543984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544452] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.691 [2024-07-24 19:47:12.544927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.691 last message repeated for each subsequent read command, [2024-07-24 19:47:12.544956] through [2024-07-24 19:47:12.555687] 00:07:24.972 [2024-07-24 19:47:12.555716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.555987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 
19:47:12.556229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.556719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 
[2024-07-24 19:47:12.557450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.972 [2024-07-24 19:47:12.557594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557870] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.557994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558761] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.558972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.559996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 
19:47:12.560036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 
[2024-07-24 19:47:12.560913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.973 [2024-07-24 19:47:12.560944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561802] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.561991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562675] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.974 [2024-07-24 19:47:12.562709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated verbatim from 19:47:12.562738 through 19:47:12.573943; repeats elided]
00:07:24.977 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.573976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 
19:47:12.574413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.977 [2024-07-24 19:47:12.574569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.574983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 
[2024-07-24 19:47:12.575316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.575979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576072] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576943] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.576973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.577729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.578102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.578133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.578169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 
19:47:12.578198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.578234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.978 [2024-07-24 19:47:12.578266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.578997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 
[2024-07-24 19:47:12.579054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579477] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.579988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580716] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.979 [2024-07-24 19:47:12.580743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated several hundred times with successive timestamps, 2024-07-24 19:47:12.580 through 19:47:12.592]
true 00:07:24.981
19:47:12.592077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.592988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 
[2024-07-24 19:47:12.593025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593444] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.593997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594532] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.594977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.595014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.595044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.595084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.595113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.595142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.595174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.983 [2024-07-24 19:47:12.595206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 
19:47:12.595406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.595906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 
[2024-07-24 19:47:12.596614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.596974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597035] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597927] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.597986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.984 [2024-07-24 19:47:12.598679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:24.988 [2024-07-24 19:47:12.609206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24
19:47:12.609338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.609980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 
[2024-07-24 19:47:12.610259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610680] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.610777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.988 [2024-07-24 19:47:12.611976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612006] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 
19:47:12.612874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.612993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.613986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 
[2024-07-24 19:47:12.614082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614514] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 19:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117 00:07:24.989 [2024-07-24 19:47:12.614789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 
1 00:07:24.989 [2024-07-24 19:47:12.614954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.614986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.615014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.615042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.615072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.615132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 19:47:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.989 [2024-07-24 19:47:12.615162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.615195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.989 [2024-07-24 19:47:12.615236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.615994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.616023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.616056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.616087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 19:47:12.616119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 [2024-07-24 
19:47:12.616150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.990 
[... identical *ERROR* message from ctrlr_bdev.c:309 repeated for each subsequent read command, timestamps 19:47:12.616177 through 19:47:12.626959 ...] 00:07:24.993 [2024-07-24
19:47:12.626999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.627975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 
[2024-07-24 19:47:12.628230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628643] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.993 [2024-07-24 19:47:12.628794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.628827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.628858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.628888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.628919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.628950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.628977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629520] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.629951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 
19:47:12.630722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.630992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 
[2024-07-24 19:47:12.631579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.994 [2024-07-24 19:47:12.631769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.631799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.631828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.631863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.631892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.632261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.632296] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.632328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.995 [2024-07-24 19:47:12.632363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.632988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633158] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:24.996 [2024-07-24 19:47:12.633641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.996
[... identical *ERROR* line repeated for every timestamp from 19:47:12.633670 through 19:47:12.644054 ...]
00:07:24.999 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:24.999 [2024-07-24 19:47:12.644084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1
* block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 
19:47:12.644597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.644973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 
[2024-07-24 19:47:12.645481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.645999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646238] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:24.999 [2024-07-24 19:47:12.646268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.646993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647113] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.647837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 
19:47:12.648281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.000 [2024-07-24 19:47:12.648993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 
[2024-07-24 19:47:12.649169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649565] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.649989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650755] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.650989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.651017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.651046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.651078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.651109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.651136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.651165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:25.001 [2024-07-24 19:47:12.651198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 
19:47:12.662553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.004 [2024-07-24 19:47:12.662676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.662970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 
[2024-07-24 19:47:12.663441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.663649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664179] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.664994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.665026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.665053] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.665084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.665116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.665147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.665176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.005 [2024-07-24 19:47:12.665214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.665901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 
19:47:12.666275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.666983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 
[2024-07-24 19:47:12.667161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667603] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.667979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.006 [2024-07-24 19:47:12.668618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.007 [2024-07-24 19:47:12.668649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.007 [2024-07-24 19:47:12.668680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.007 [2024-07-24 19:47:12.668708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.007 [2024-07-24 19:47:12.668761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.007 [2024-07-24 19:47:12.668791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.007 [2024-07-24 19:47:12.668820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.007 [2024-07-24 19:47:12.668849] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:07:25.010 [2024-07-24 19:47:12.678912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.678942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.678976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679372] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.679514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:25.010 [2024-07-24 19:47:12.680326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 
19:47:12.680501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.010 [2024-07-24 19:47:12.680891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.680917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.680947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.680973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 
[2024-07-24 19:47:12.681369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681793] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.681974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682834] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.682991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 
19:47:12.683697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.683967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.011 [2024-07-24 19:47:12.684000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 
[2024-07-24 19:47:12.684919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.684979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685347] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.685996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686190] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.012 [2024-07-24 19:47:12.686223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous *ERROR* line repeated continuously with timestamps 2024-07-24 19:47:12.686262 through 19:47:12.696406; build timeline 00:07:25.012 through 00:07:25.015]
> SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696965] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.696995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.015 [2024-07-24 19:47:12.697845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.697870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.697894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.697918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.697942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.697966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.697990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 
19:47:12.698250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.698978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 
[2024-07-24 19:47:12.699027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699379] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.699979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700460] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.016 [2024-07-24 19:47:12.700995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 
19:47:12.701339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.701978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 
[2024-07-24 19:47:12.702551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702957] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.702985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.017 [2024-07-24 19:47:12.703406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated; identical *ERROR* entries from 19:47:12.703437 through 19:47:12.714376 omitted]
> SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714824] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.714988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:07:25.021 [2024-07-24 
19:47:12.715607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.715975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.716007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.716036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.716065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.716098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.021 [2024-07-24 19:47:12.716126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 
[2024-07-24 19:47:12.716494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716934] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.716996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.717866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718480] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.718972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.719001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.719031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.719060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.719091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.022 [2024-07-24 19:47:12.719119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 
19:47:12.719386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.719979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 
[2024-07-24 19:47:12.720425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720873] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.720993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:07:25.023 [2024-07-24 19:47:12.721304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1
00:07:25.023 [2024-07-24 19:47:12.721330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:07:25.024 [ ... identical message repeated 33 more times between 2024-07-24 19:47:12.721360 and 2024-07-24 19:47:12.722424; duplicates trimmed ... ]
00:07:25.968 Initializing NVMe Controllers
00:07:25.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:25.968 Controller IO queue size 128, less than required.
00:07:25.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:25.968 Controller IO queue size 128, less than required.
00:07:25.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:25.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:25.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:25.968 Initialization complete. Launching workers.
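The repeated nvmf_bdev_ctrlr_read_cmd error earlier in the log is a length-validation failure: the read would transfer NLB * block_size bytes, but the supplied SGL describes only 1 byte, so the controller rejects the command. A minimal sketch of that check (variable names are illustrative, not SPDK's internals):

```shell
# Illustrative reconstruction of the length check behind the repeated
# error above: reject a read whose payload exceeds the SGL length.
nlb=1
block_size=512
sgl_length=1
if (( nlb * block_size > sgl_length )); then
    echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}"
fi
```

With the values from the log (NLB 1, block size 512, SGL length 1) the condition trips and the sketch prints the same message the target logs.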
00:07:25.968 ========================================================
00:07:25.968                                                                 Latency(us)
00:07:25.968 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:07:25.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1504.33       0.73   24505.10    2053.67 1275971.00
00:07:25.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8859.70       4.33   14447.13    2329.58  399988.67
00:07:25.968 ========================================================
00:07:25.968 Total                                                                  :   10364.03       5.06   15907.04    2053.67 1275971.00
00:07:25.968
00:07:25.968 19:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:26.228 19:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:07:26.228 19:47:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:07:26.228 true
00:07:26.228 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484117
00:07:26.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3484117) - No such process
00:07:26.228 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3484117
00:07:26.228 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:26.490 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:26.490
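The `kill -0 3484117` step in the trace above probes whether the perf process is still running: signal 0 delivers nothing but reports whether the PID exists, which is why the "(3484117) - No such process" message is harmless once the workload has exited. A standalone sketch of the pattern:

```shell
# kill -0 sends no signal; it only tests whether the target PID exists.
sleep 0.2 &
pid=$!

kill -0 "$pid" 2>/dev/null && echo "alive"   # background job still running

wait "$pid"                                  # let the job finish and be reaped

kill -0 "$pid" 2>/dev/null || echo "gone"    # PID no longer exists
```

Run in bash, this prints "alive" followed by "gone"; the test script uses the same probe (with `|| true`-style tolerance) to decide when its background workload has terminated.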
19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:26.490 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:26.490 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:26.490 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.490 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:26.750 null0 00:07:26.750 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.750 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.750 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:27.011 null1 00:07:27.011 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.011 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.011 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:27.011 null2 00:07:27.011 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.011 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.011 19:47:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:27.270 null3 00:07:27.270 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.270 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.270 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:27.531 null4 00:07:27.531 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.531 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.531 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:27.531 null5 00:07:27.531 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.531 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.531 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:27.791 null6 00:07:27.791 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.791 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.791 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:28.052 null7 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.052 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3490637 3490638 3490641 3490642 3490644 3490646 3490647 3490650 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.053 
19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.053 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.053 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.314 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.575 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.835 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.096 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.096 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.096 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.357 19:47:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.357 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.358 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.619 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.879 19:47:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.879 19:47:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.879 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.139 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.139 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.139 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.139 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.139 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.139 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.399 19:47:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.399 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.659 
19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.659 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.919 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.179 19:47:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.179 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.179 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.179 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.179 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.179 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.179 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.179 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.179 19:47:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.180 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.180 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.180 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.180 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.180 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:31.440 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:31.441 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.441 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:31.441 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.441 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:31.441 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.441 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.441 rmmod nvme_tcp 00:07:31.702 rmmod nvme_fabrics 00:07:31.702 rmmod nvme_keyring 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:31.702 19:47:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3483543 ']' 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3483543 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3483543 ']' 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3483543 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3483543 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3483543' 00:07:31.702 killing process with pid 3483543 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3483543 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3483543 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.250 00:07:34.250 real 0m47.885s 00:07:34.250 user 3m10.605s 00:07:34.250 sys 0m15.418s 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:34.250 ************************************ 00:07:34.250 END TEST nvmf_ns_hotplug_stress 00:07:34.250 ************************************ 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.250 ************************************ 00:07:34.250 START TEST nvmf_delete_subsystem 00:07:34.250 ************************************ 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:34.250 * Looking for test storage... 00:07:34.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.250 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.251 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:40.842 19:47:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.842 19:47:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:40.842 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:40.842 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.842 19:47:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:40.842 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:40.842 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:40.842 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.842 
19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.103 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.103 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.103 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.103 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.103 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.103 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:07:41.103 00:07:41.103 --- 10.0.0.2 ping statistics --- 00:07:41.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.103 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:07:41.103 00:07:41.103 --- 10.0.0.1 ping statistics --- 00:07:41.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.103 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.103 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3495800 00:07:41.364 19:47:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3495800
00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3495800 ']'
00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:41.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:41.364 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:41.364 [2024-07-24 19:47:29.122677] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:07:41.364 [2024-07-24 19:47:29.122745] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:41.364 EAL: No free 2048 kB hugepages reported on node 1
00:07:41.364 [2024-07-24 19:47:29.192875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:41.364 [2024-07-24 19:47:29.266546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:41.364 [2024-07-24 19:47:29.266587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:41.364 [2024-07-24 19:47:29.266595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:41.364 [2024-07-24 19:47:29.266601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:41.364 [2024-07-24 19:47:29.266607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:41.364 [2024-07-24 19:47:29.266757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:41.364 [2024-07-24 19:47:29.266759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.935 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:41.935 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0
00:07:41.935 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:41.935 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:41.935 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.196 [2024-07-24 19:47:29.926163] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:42.196 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.197 [2024-07-24 19:47:29.942312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.197 NULL1
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.197 Delay0
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3496044
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:07:42.197 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:42.197 EAL: No free 2048 kB hugepages reported on node 1
00:07:42.197 [2024-07-24 19:47:30.036978] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
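The bring-up traced above (delete_subsystem.sh lines 15-26) boils down to a short RPC sequence against the running nvmf_tgt. A minimal sketch follows; it only collects and prints the commands, assuming SPDK's scripts/rpc.py is on PATH if you pipe them to a shell — the NQN, serial, and bdev parameters are copied from the log, but the script itself is an illustration, not the test script:

```shell
#!/usr/bin/env bash
# Sketch of the target bring-up traced above. The commands are collected in
# CMDS and printed; pipe the output to sh (with SPDK's scripts/rpc.py on PATH
# and nvmf_tgt running) to actually execute them.
NQN=nqn.2016-06.io.spdk:cnode1
CMDS=(
  "rpc.py nvmf_create_transport -t tcp -o -u 8192"                 # TCP transport, 8 KiB in-capsule data
  "rpc.py nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10"
  "rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
  "rpc.py bdev_null_create NULL1 1000 512"                         # 1000 MiB null bdev, 512 B blocks
  "rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000"
  "rpc.py nvmf_subsystem_add_ns $NQN Delay0"                       # Delay0 keeps I/O in flight during delete
)
printf '%s\n' "${CMDS[@]}"
```

The delay bdev is the point of the test: with ~1 s of added latency per I/O, spdk_nvme_perf is guaranteed to have commands outstanding when `nvmf_delete_subsystem` runs, which is what produces the aborted-command errors later in the log.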
00:07:44.110 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:44.110 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:44.110 19:47:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:44.370 [... repeated per-I/O trace lines omitted: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", emitted while the subsystem was deleted under load and interleaved with the qpair state errors below ...]
00:07:44.371 [2024-07-24 19:47:32.252731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300710 is same with the state(5) to be set
00:07:44.371 [2024-07-24 19:47:32.253750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300000 is same with the state(5) to be set
00:07:45.315 [2024-07-24 19:47:33.219313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2301ac0 is same with the state(5) to be set
00:07:45.315 [2024-07-24 19:47:33.256141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23003e0 is same with the state(5) to be set
00:07:45.315 [2024-07-24 19:47:33.256496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2300a40 is same with the state(5) to be set
00:07:45.315 [2024-07-24 19:47:33.259621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95f400d7a0 is same with the state(5) to be set
00:07:45.315 [2024-07-24 19:47:33.260852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f95f400d000 is same with the state(5) to be set
00:07:45.315 Initializing NVMe Controllers
00:07:45.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:45.315 Controller IO queue size 128, less than required.
00:07:45.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:45.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:45.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:45.315 Initialization complete. Launching workers.
00:07:45.315 ========================================================
00:07:45.315 Latency(us)
00:07:45.315 Device Information : IOPS MiB/s Average min max
00:07:45.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.39 0.08 913712.52 810.82 1006343.49
00:07:45.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 180.32 0.09 926937.85 395.15 1010647.76
00:07:45.315 ========================================================
00:07:45.315 Total : 341.72 0.17 920691.48 395.15 1010647.76
00:07:45.315
00:07:45.315 [2024-07-24 19:47:33.261483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2301ac0 (9): Bad file descriptor
00:07:45.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:45.315 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:45.315 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:45.315 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3496044
00:07:45.315 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:45.887 19:47:33
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3496044
00:07:45.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3496044) - No such process
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3496044
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3496044
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3496044
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:45.887 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:45.888 [2024-07-24 19:47:33.793303] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3496830
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830
00:07:45.888 19:47:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:46.148 EAL: No free 2048 kB hugepages reported on node 1
00:07:46.148 [2024-07-24 19:47:33.860025] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:46.409 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:46.409 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830
00:07:46.409 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:46.982 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:46.982 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830
00:07:46.982 19:47:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:47.554 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:47.554 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830
00:07:47.554 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:48.127 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:48.128 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830
00:07:48.128 19:47:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:48.438 19:47:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:48.438 19:47:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830
00:07:48.438 19:47:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:49.014 19:47:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:49.014 19:47:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830
00:07:49.014 19:47:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:49.014 Initializing NVMe Controllers
00:07:49.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:49.014 Controller IO queue size 128, less than required.
00:07:49.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:49.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:49.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:49.014 Initialization complete. Launching workers.
00:07:49.014 ======================================================== 00:07:49.014 Latency(us) 00:07:49.014 Device Information : IOPS MiB/s Average min max 00:07:49.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002053.23 1000290.72 1005534.55 00:07:49.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003286.77 1000352.80 1009791.30 00:07:49.014 ======================================================== 00:07:49.014 Total : 256.00 0.12 1002670.00 1000290.72 1009791.30 00:07:49.014 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3496830 00:07:49.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3496830) - No such process 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3496830 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:07:49.586 rmmod nvme_tcp 00:07:49.586 rmmod nvme_fabrics 00:07:49.586 rmmod nvme_keyring 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3495800 ']' 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3495800 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3495800 ']' 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3495800 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3495800 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3495800' 00:07:49.586 killing process with pid 3495800 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3495800 00:07:49.586 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
3495800 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.847 19:47:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.762 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.762 00:07:51.762 real 0m17.895s 00:07:51.762 user 0m30.847s 00:07:51.762 sys 0m6.129s 00:07:51.762 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.762 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.762 ************************************ 00:07:51.762 END TEST nvmf_delete_subsystem 00:07:51.762 ************************************ 00:07:51.762 19:47:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.762 19:47:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.762 19:47:39 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.762 19:47:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.024 ************************************ 00:07:52.024 START TEST nvmf_host_management 00:07:52.024 ************************************ 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:52.024 * Looking for test storage... 00:07:52.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.024 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.025 19:47:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.025 19:47:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.025 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:00.177 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:00.177 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.177 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:00.178 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:08:00.178 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:08:00.178 00:08:00.178 --- 10.0.0.2 ping statistics --- 00:08:00.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.178 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:08:00.178 00:08:00.178 --- 10.0.0.1 ping statistics --- 00:08:00.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.178 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.178 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.178 19:47:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3501725 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3501725 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3501725 ']' 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.178 [2024-07-24 19:47:47.062071] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:08:00.178 [2024-07-24 19:47:47.062128] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.178 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.178 [2024-07-24 19:47:47.146515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.178 [2024-07-24 19:47:47.214370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.178 [2024-07-24 19:47:47.214413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.178 [2024-07-24 19:47:47.214420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.178 [2024-07-24 19:47:47.214427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.178 [2024-07-24 19:47:47.214432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:00.178 [2024-07-24 19:47:47.214547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.178 [2024-07-24 19:47:47.214706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.178 [2024-07-24 19:47:47.214824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.178 [2024-07-24 19:47:47.214826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.178 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.179 [2024-07-24 19:47:47.878092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:00.179 19:47:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.179 Malloc0 00:08:00.179 [2024-07-24 19:47:47.941431] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3501894 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3501894 /var/tmp/bdevperf.sock 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3501894 ']' 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:00.179 { 00:08:00.179 "params": { 00:08:00.179 "name": "Nvme$subsystem", 00:08:00.179 "trtype": "$TEST_TRANSPORT", 00:08:00.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.179 "adrfam": "ipv4", 00:08:00.179 "trsvcid": "$NVMF_PORT", 00:08:00.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.179 "hdgst": ${hdgst:-false}, 
00:08:00.179 "ddgst": ${ddgst:-false} 00:08:00.179 }, 00:08:00.179 "method": "bdev_nvme_attach_controller" 00:08:00.179 } 00:08:00.179 EOF 00:08:00.179 )") 00:08:00.179 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:00.179 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:00.179 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:00.179 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:00.179 "params": { 00:08:00.179 "name": "Nvme0", 00:08:00.179 "trtype": "tcp", 00:08:00.179 "traddr": "10.0.0.2", 00:08:00.179 "adrfam": "ipv4", 00:08:00.179 "trsvcid": "4420", 00:08:00.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:00.179 "hdgst": false, 00:08:00.179 "ddgst": false 00:08:00.179 }, 00:08:00.179 "method": "bdev_nvme_attach_controller" 00:08:00.179 }' 00:08:00.179 [2024-07-24 19:47:48.040429] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:08:00.179 [2024-07-24 19:47:48.040481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501894 ] 00:08:00.179 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.179 [2024-07-24 19:47:48.099182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.440 [2024-07-24 19:47:48.163664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.701 Running I/O for 10 seconds... 
00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.964 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.964 [2024-07-24 19:47:48.888514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148c2a0 is same with the state(5) to be set 00:08:00.964 [2024-07-24 19:47:48.889301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
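The `waitforio` calls above poll `bdev_get_iostat` over the bdevperf RPC socket, extract `num_read_ops` with `jq`, and stop once the count crosses the threshold (100 here; the log shows it passing on the first poll with `read_io_count=387`), giving up after 10 attempts. The same wait-until-count pattern, extracted into a generic helper (`wait_for_count` and its arguments are illustrative names, not part of the SPDK scripts; in `host_management.sh` the polled command is the `rpc.py ... bdev_get_iostat | jq` pipeline):

```shell
# Illustrative helper: run the given command up to `retries` times, once per
# `interval` seconds, until it prints a number >= `threshold`.
# Returns 0 on success, 1 on timeout.
wait_for_count() {
    local threshold=$1 retries=$2 interval=$3
    shift 3
    local i count
    for ((i = 0; i < retries; i++)); do
        count=$("$@")
        if [ "${count:-0}" -ge "$threshold" ]; then
            return 0
        fi
        sleep "$interval"
    done
    return 1
}
```

Usage would look like `wait_for_count 100 10 1 get_read_ops`, where `get_read_ops` is a small wrapper around the `rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1` call.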
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889544] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889633] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.964 [2024-07-24 19:47:48.889758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.964 [2024-07-24 19:47:48.889765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 
[2024-07-24 19:47:48.889825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.889983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.889993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:00.965 [2024-07-24 19:47:48.890111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890204] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:00.965 [2024-07-24 19:47:48.890403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:00.965 [2024-07-24 19:47:48.890411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ac4f0 is same with the state(5) to be set 00:08:00.966 [2024-07-24 19:47:48.890450] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16ac4f0 was disconnected and freed. reset controller. 00:08:00.966 [2024-07-24 19:47:48.891661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:00.966 task offset: 60928 on job bdev=Nvme0n1 fails 00:08:00.966 00:08:00.966 Latency(us) 00:08:00.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.966 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:00.966 Job: Nvme0n1 ended in about 0.46 seconds with error 00:08:00.966 Verification LBA range: start 0x0 length 0x400 00:08:00.966 Nvme0n1 : 0.46 964.81 60.30 137.83 0.00 56578.69 1829.55 51336.53 00:08:00.966 =================================================================================================================== 00:08:00.966 Total : 964.81 60.30 137.83 0.00 56578.69 1829.55 51336.53 00:08:00.966 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.966 [2024-07-24 19:47:48.893658] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.966 [2024-07-24 19:47:48.893680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129b3b0 (9): Bad file descriptor 00:08:00.966 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:00.966 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.966 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.966 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.966 19:47:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:01.226 [2024-07-24 19:47:48.957063] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3501894 00:08:02.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3501894) - No such process 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # 
for subsystem in "${@:-1}" 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:02.167 { 00:08:02.167 "params": { 00:08:02.167 "name": "Nvme$subsystem", 00:08:02.167 "trtype": "$TEST_TRANSPORT", 00:08:02.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:02.167 "adrfam": "ipv4", 00:08:02.167 "trsvcid": "$NVMF_PORT", 00:08:02.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:02.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:02.167 "hdgst": ${hdgst:-false}, 00:08:02.167 "ddgst": ${ddgst:-false} 00:08:02.167 }, 00:08:02.167 "method": "bdev_nvme_attach_controller" 00:08:02.167 } 00:08:02.167 EOF 00:08:02.167 )") 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:02.167 19:47:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:02.167 "params": { 00:08:02.167 "name": "Nvme0", 00:08:02.167 "trtype": "tcp", 00:08:02.167 "traddr": "10.0.0.2", 00:08:02.167 "adrfam": "ipv4", 00:08:02.167 "trsvcid": "4420", 00:08:02.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:02.167 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:02.167 "hdgst": false, 00:08:02.167 "ddgst": false 00:08:02.167 }, 00:08:02.167 "method": "bdev_nvme_attach_controller" 00:08:02.167 }' 00:08:02.167 [2024-07-24 19:47:49.964032] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:08:02.167 [2024-07-24 19:47:49.964085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502252 ] 00:08:02.167 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.167 [2024-07-24 19:47:50.023978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.167 [2024-07-24 19:47:50.102335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.428 Running I/O for 1 seconds... 00:08:03.369 00:08:03.369 Latency(us) 00:08:03.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.369 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:03.369 Verification LBA range: start 0x0 length 0x400 00:08:03.369 Nvme0n1 : 1.06 1025.16 64.07 0.00 0.00 61559.37 15182.51 53739.52 00:08:03.369 =================================================================================================================== 00:08:03.369 Total : 1025.16 64.07 0.00 0.00 61559.37 15182.51 53739.52 00:08:03.628 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:03.628 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:03.628 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:03.628 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:03.628 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:03.628 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.629 rmmod nvme_tcp 00:08:03.629 rmmod nvme_fabrics 00:08:03.629 rmmod nvme_keyring 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3501725 ']' 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3501725 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3501725 ']' 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3501725 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3501725 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3501725' 00:08:03.629 killing process with pid 3501725 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3501725 00:08:03.629 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3501725 00:08:03.888 [2024-07-24 19:47:51.683222] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.888 19:47:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:06.441 00:08:06.441 real 
0m14.034s 00:08:06.441 user 0m22.442s 00:08:06.441 sys 0m6.268s 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.441 ************************************ 00:08:06.441 END TEST nvmf_host_management 00:08:06.441 ************************************ 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.441 ************************************ 00:08:06.441 START TEST nvmf_lvol 00:08:06.441 ************************************ 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.441 * Looking for test storage... 
00:08:06.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.441 
19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:06.441 
19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:06.441 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:06.442 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:13.104 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.104 
19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:13.104 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:13.104 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:13.104 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.104 19:48:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.104 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.105 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.105 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:13.105 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.366 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:08:13.366 00:08:13.366 --- 10.0.0.2 ping statistics --- 00:08:13.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.366 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:08:13.366 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:08:13.366 00:08:13.366 --- 10.0.0.1 ping statistics --- 00:08:13.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.366 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:08:13.366 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.366 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.367 19:48:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3506970 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3506970 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3506970 ']' 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.367 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.367 [2024-07-24 19:48:01.199156] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:08:13.367 [2024-07-24 19:48:01.199213] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.367 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.367 [2024-07-24 19:48:01.265988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.627 [2024-07-24 19:48:01.332413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.627 [2024-07-24 19:48:01.332453] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.627 [2024-07-24 19:48:01.332460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.627 [2024-07-24 19:48:01.332466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.627 [2024-07-24 19:48:01.332472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:13.627 [2024-07-24 19:48:01.332608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.627 [2024-07-24 19:48:01.332740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.627 [2024-07-24 19:48:01.332743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.199 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.199 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:14.199 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.199 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.199 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:14.199 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.199 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:14.461 [2024-07-24 19:48:02.156945] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.461 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.461 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:14.461 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:14.722 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:14.722 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:14.984 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:14.984 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=856b2f61-92d0-486b-8523-a2564670a159 00:08:14.984 19:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 856b2f61-92d0-486b-8523-a2564670a159 lvol 20 00:08:15.245 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=accfa6c7-64f6-4c8f-bf71-f37f49359520 00:08:15.245 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.505 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 accfa6c7-64f6-4c8f-bf71-f37f49359520 00:08:15.505 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.766 [2024-07-24 19:48:03.565602] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.766 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.026 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3507423 00:08:16.026 19:48:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:16.026 19:48:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:16.026 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.971 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot accfa6c7-64f6-4c8f-bf71-f37f49359520 MY_SNAPSHOT 00:08:17.232 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=25069592-a9c8-4d07-b0b9-9a50722ea69c 00:08:17.232 19:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize accfa6c7-64f6-4c8f-bf71-f37f49359520 30 00:08:17.232 19:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 25069592-a9c8-4d07-b0b9-9a50722ea69c MY_CLONE 00:08:17.493 19:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=23580ce8-6e82-47ab-a624-8c747035a782 00:08:17.493 19:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 23580ce8-6e82-47ab-a624-8c747035a782 00:08:17.754 19:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3507423 00:08:27.757 Initializing NVMe Controllers 00:08:27.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:27.757 Controller IO queue size 128, less than required. 00:08:27.757 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:27.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:27.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:27.757 Initialization complete. Launching workers. 00:08:27.757 ======================================================== 00:08:27.757 Latency(us) 00:08:27.757 Device Information : IOPS MiB/s Average min max 00:08:27.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12178.90 47.57 10514.64 1580.83 65992.01 00:08:27.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17751.30 69.34 7210.88 689.78 47229.87 00:08:27.757 ======================================================== 00:08:27.757 Total : 29930.20 116.91 8555.21 689.78 65992.01 00:08:27.757 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete accfa6c7-64f6-4c8f-bf71-f37f49359520 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 856b2f61-92d0-486b-8523-a2564670a159 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.757 rmmod nvme_tcp 00:08:27.757 rmmod nvme_fabrics 00:08:27.757 rmmod nvme_keyring 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3506970 ']' 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3506970 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3506970 ']' 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3506970 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3506970 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3506970' 00:08:27.757 killing process with pid 3506970 00:08:27.757 19:48:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3506970 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3506970 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.757 19:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.144 19:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.144 00:08:29.144 real 0m23.059s 00:08:29.144 user 1m3.490s 00:08:29.144 sys 0m7.672s 00:08:29.144 19:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.144 19:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:29.144 ************************************ 00:08:29.144 END TEST nvmf_lvol 00:08:29.144 ************************************ 00:08:29.144 19:48:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:29.144 19:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.144 19:48:16 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.144 19:48:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.144 ************************************ 00:08:29.144 START TEST nvmf_lvs_grow 00:08:29.144 ************************************ 00:08:29.144 19:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:29.144 * Looking for test storage... 00:08:29.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.144 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.144 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.407 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.553 19:48:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:37.553 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.553 
19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:37.553 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.553 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.554 19:48:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:37.554 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:37.554 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.554 19:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.554 19:48:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:08:37.554 00:08:37.554 --- 10.0.0.2 ping statistics --- 00:08:37.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.554 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:08:37.554 00:08:37.554 --- 10.0.0.1 ping statistics --- 00:08:37.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.554 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3514362 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3514362 00:08:37.554 19:48:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3514362 ']' 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.554 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.555 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.555 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.555 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.555 [2024-07-24 19:48:24.424403] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:08:37.555 [2024-07-24 19:48:24.424471] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.555 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.555 [2024-07-24 19:48:24.497546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.555 [2024-07-24 19:48:24.571270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.555 [2024-07-24 19:48:24.571312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:37.555 [2024-07-24 19:48:24.571320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.555 [2024-07-24 19:48:24.571327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.555 [2024-07-24 19:48:24.571333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.555 [2024-07-24 19:48:24.571353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:37.555 [2024-07-24 19:48:25.386602] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.555 ************************************ 00:08:37.555 START TEST lvs_grow_clean 00:08:37.555 ************************************ 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.555 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.816 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:37.816 19:48:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:38.076 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:38.076 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:38.076 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:38.076 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:38.076 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:38.076 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f lvol 150 00:08:38.336 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2e80faf1-ff7f-4da2-b692-6847f32707c5 00:08:38.336 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:38.336 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:38.336 [2024-07-24 19:48:26.278286] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:38.337 [2024-07-24 19:48:26.278341] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:38.337 true 00:08:38.597 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:38.597 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:38.597 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:38.597 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.857 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2e80faf1-ff7f-4da2-b692-6847f32707c5 00:08:38.857 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:39.117 [2024-07-24 19:48:26.884172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.117 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.117 19:48:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3514920 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3514920 /var/tmp/bdevperf.sock 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3514920 ']' 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.117 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:39.418 [2024-07-24 19:48:27.101589] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:08:39.418 [2024-07-24 19:48:27.101637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514920 ] 00:08:39.418 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.418 [2024-07-24 19:48:27.178064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.418 [2024-07-24 19:48:27.242469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.998 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.998 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:39.998 19:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:40.569 Nvme0n1 00:08:40.569 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:40.569 [ 00:08:40.569 { 00:08:40.569 "name": "Nvme0n1", 00:08:40.569 "aliases": [ 00:08:40.569 "2e80faf1-ff7f-4da2-b692-6847f32707c5" 00:08:40.569 ], 00:08:40.569 "product_name": "NVMe disk", 00:08:40.569 "block_size": 4096, 00:08:40.569 "num_blocks": 38912, 00:08:40.569 "uuid": "2e80faf1-ff7f-4da2-b692-6847f32707c5", 00:08:40.569 "assigned_rate_limits": { 00:08:40.569 "rw_ios_per_sec": 0, 00:08:40.569 "rw_mbytes_per_sec": 0, 00:08:40.569 "r_mbytes_per_sec": 0, 00:08:40.569 "w_mbytes_per_sec": 0 00:08:40.569 }, 00:08:40.569 "claimed": false, 00:08:40.569 "zoned": false, 00:08:40.569 
"supported_io_types": { 00:08:40.569 "read": true, 00:08:40.569 "write": true, 00:08:40.569 "unmap": true, 00:08:40.569 "flush": true, 00:08:40.569 "reset": true, 00:08:40.569 "nvme_admin": true, 00:08:40.569 "nvme_io": true, 00:08:40.570 "nvme_io_md": false, 00:08:40.570 "write_zeroes": true, 00:08:40.570 "zcopy": false, 00:08:40.570 "get_zone_info": false, 00:08:40.570 "zone_management": false, 00:08:40.570 "zone_append": false, 00:08:40.570 "compare": true, 00:08:40.570 "compare_and_write": true, 00:08:40.570 "abort": true, 00:08:40.570 "seek_hole": false, 00:08:40.570 "seek_data": false, 00:08:40.570 "copy": true, 00:08:40.570 "nvme_iov_md": false 00:08:40.570 }, 00:08:40.570 "memory_domains": [ 00:08:40.570 { 00:08:40.570 "dma_device_id": "system", 00:08:40.570 "dma_device_type": 1 00:08:40.570 } 00:08:40.570 ], 00:08:40.570 "driver_specific": { 00:08:40.570 "nvme": [ 00:08:40.570 { 00:08:40.570 "trid": { 00:08:40.570 "trtype": "TCP", 00:08:40.570 "adrfam": "IPv4", 00:08:40.570 "traddr": "10.0.0.2", 00:08:40.570 "trsvcid": "4420", 00:08:40.570 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:40.570 }, 00:08:40.570 "ctrlr_data": { 00:08:40.570 "cntlid": 1, 00:08:40.570 "vendor_id": "0x8086", 00:08:40.570 "model_number": "SPDK bdev Controller", 00:08:40.570 "serial_number": "SPDK0", 00:08:40.570 "firmware_revision": "24.09", 00:08:40.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.570 "oacs": { 00:08:40.570 "security": 0, 00:08:40.570 "format": 0, 00:08:40.570 "firmware": 0, 00:08:40.570 "ns_manage": 0 00:08:40.570 }, 00:08:40.570 "multi_ctrlr": true, 00:08:40.570 "ana_reporting": false 00:08:40.570 }, 00:08:40.570 "vs": { 00:08:40.570 "nvme_version": "1.3" 00:08:40.570 }, 00:08:40.570 "ns_data": { 00:08:40.570 "id": 1, 00:08:40.570 "can_share": true 00:08:40.570 } 00:08:40.570 } 00:08:40.570 ], 00:08:40.570 "mp_policy": "active_passive" 00:08:40.570 } 00:08:40.570 } 00:08:40.570 ] 00:08:40.570 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3515256 00:08:40.570 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:40.570 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:40.570 Running I/O for 10 seconds... 00:08:41.954 Latency(us) 00:08:41.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.954 Nvme0n1 : 1.00 17569.00 68.63 0.00 0.00 0.00 0.00 0.00 00:08:41.954 =================================================================================================================== 00:08:41.954 Total : 17569.00 68.63 0.00 0.00 0.00 0.00 0.00 00:08:41.954 00:08:42.526 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:42.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.787 Nvme0n1 : 2.00 17648.50 68.94 0.00 0.00 0.00 0.00 0.00 00:08:42.787 =================================================================================================================== 00:08:42.787 Total : 17648.50 68.94 0.00 0.00 0.00 0.00 0.00 00:08:42.787 00:08:42.787 true 00:08:42.787 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:42.787 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:42.787 19:48:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:42.787 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:42.787 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3515256 00:08:43.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.730 Nvme0n1 : 3.00 17691.00 69.11 0.00 0.00 0.00 0.00 0.00 00:08:43.731 =================================================================================================================== 00:08:43.731 Total : 17691.00 69.11 0.00 0.00 0.00 0.00 0.00 00:08:43.731 00:08:44.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.673 Nvme0n1 : 4.00 17726.25 69.24 0.00 0.00 0.00 0.00 0.00 00:08:44.673 =================================================================================================================== 00:08:44.673 Total : 17726.25 69.24 0.00 0.00 0.00 0.00 0.00 00:08:44.673 00:08:45.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.624 Nvme0n1 : 5.00 17753.80 69.35 0.00 0.00 0.00 0.00 0.00 00:08:45.624 =================================================================================================================== 00:08:45.624 Total : 17753.80 69.35 0.00 0.00 0.00 0.00 0.00 00:08:45.624 00:08:46.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.571 Nvme0n1 : 6.00 17778.83 69.45 0.00 0.00 0.00 0.00 0.00 00:08:46.571 =================================================================================================================== 00:08:46.571 Total : 17778.83 69.45 0.00 0.00 0.00 0.00 0.00 00:08:46.571 00:08:47.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.957 Nvme0n1 : 7.00 17796.71 69.52 0.00 0.00 0.00 0.00 0.00 00:08:47.957 
=================================================================================================================== 00:08:47.957 Total : 17796.71 69.52 0.00 0.00 0.00 0.00 0.00 00:08:47.957 00:08:48.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.900 Nvme0n1 : 8.00 17811.12 69.57 0.00 0.00 0.00 0.00 0.00 00:08:48.900 =================================================================================================================== 00:08:48.900 Total : 17811.12 69.57 0.00 0.00 0.00 0.00 0.00 00:08:48.900 00:08:49.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.843 Nvme0n1 : 9.00 17824.11 69.63 0.00 0.00 0.00 0.00 0.00 00:08:49.843 =================================================================================================================== 00:08:49.843 Total : 17824.11 69.63 0.00 0.00 0.00 0.00 0.00 00:08:49.843 00:08:50.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.787 Nvme0n1 : 10.00 17835.30 69.67 0.00 0.00 0.00 0.00 0.00 00:08:50.787 =================================================================================================================== 00:08:50.787 Total : 17835.30 69.67 0.00 0.00 0.00 0.00 0.00 00:08:50.787 00:08:50.787 00:08:50.787 Latency(us) 00:08:50.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.787 Nvme0n1 : 10.01 17835.84 69.67 0.00 0.00 7171.70 4642.13 13161.81 00:08:50.787 =================================================================================================================== 00:08:50.787 Total : 17835.84 69.67 0.00 0.00 7171.70 4642.13 13161.81 00:08:50.787 0 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3514920 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 3514920 ']' 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3514920 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3514920 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3514920' 00:08:50.787 killing process with pid 3514920 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3514920 00:08:50.787 Received shutdown signal, test time was about 10.000000 seconds 00:08:50.787 00:08:50.787 Latency(us) 00:08:50.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.787 =================================================================================================================== 00:08:50.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3514920 00:08:50.787 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.048 19:48:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:51.309 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:51.309 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:51.309 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:51.309 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:51.309 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:51.570 [2024-07-24 19:48:39.355796] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:51.570 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:51.570 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:51.570 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:51.570 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:51.571 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:51.831 request: 00:08:51.831 { 00:08:51.831 "uuid": "fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f", 00:08:51.831 "method": "bdev_lvol_get_lvstores", 00:08:51.831 "req_id": 1 00:08:51.831 } 00:08:51.831 Got JSON-RPC error response 00:08:51.831 response: 00:08:51.831 { 00:08:51.831 "code": -19, 00:08:51.831 "message": "No such device" 00:08:51.831 } 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:51.832 19:48:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.832 aio_bdev 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2e80faf1-ff7f-4da2-b692-6847f32707c5 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2e80faf1-ff7f-4da2-b692-6847f32707c5 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.832 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:52.092 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2e80faf1-ff7f-4da2-b692-6847f32707c5 -t 2000 00:08:52.092 [ 00:08:52.092 { 
00:08:52.092 "name": "2e80faf1-ff7f-4da2-b692-6847f32707c5", 00:08:52.092 "aliases": [ 00:08:52.092 "lvs/lvol" 00:08:52.092 ], 00:08:52.092 "product_name": "Logical Volume", 00:08:52.092 "block_size": 4096, 00:08:52.092 "num_blocks": 38912, 00:08:52.092 "uuid": "2e80faf1-ff7f-4da2-b692-6847f32707c5", 00:08:52.092 "assigned_rate_limits": { 00:08:52.092 "rw_ios_per_sec": 0, 00:08:52.092 "rw_mbytes_per_sec": 0, 00:08:52.092 "r_mbytes_per_sec": 0, 00:08:52.092 "w_mbytes_per_sec": 0 00:08:52.092 }, 00:08:52.092 "claimed": false, 00:08:52.092 "zoned": false, 00:08:52.092 "supported_io_types": { 00:08:52.092 "read": true, 00:08:52.092 "write": true, 00:08:52.092 "unmap": true, 00:08:52.092 "flush": false, 00:08:52.092 "reset": true, 00:08:52.092 "nvme_admin": false, 00:08:52.092 "nvme_io": false, 00:08:52.092 "nvme_io_md": false, 00:08:52.092 "write_zeroes": true, 00:08:52.092 "zcopy": false, 00:08:52.092 "get_zone_info": false, 00:08:52.092 "zone_management": false, 00:08:52.092 "zone_append": false, 00:08:52.092 "compare": false, 00:08:52.092 "compare_and_write": false, 00:08:52.092 "abort": false, 00:08:52.092 "seek_hole": true, 00:08:52.092 "seek_data": true, 00:08:52.092 "copy": false, 00:08:52.092 "nvme_iov_md": false 00:08:52.092 }, 00:08:52.092 "driver_specific": { 00:08:52.092 "lvol": { 00:08:52.092 "lvol_store_uuid": "fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f", 00:08:52.092 "base_bdev": "aio_bdev", 00:08:52.092 "thin_provision": false, 00:08:52.093 "num_allocated_clusters": 38, 00:08:52.093 "snapshot": false, 00:08:52.093 "clone": false, 00:08:52.093 "esnap_clone": false 00:08:52.093 } 00:08:52.093 } 00:08:52.093 } 00:08:52.093 ] 00:08:52.093 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:52.093 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:52.093 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:52.354 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:52.354 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:52.354 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:52.615 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:52.615 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2e80faf1-ff7f-4da2-b692-6847f32707c5 00:08:52.615 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fbc1b7a4-f3ba-4483-99f7-dcf70ff5a74f 00:08:52.875 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.136 00:08:53.136 real 0m15.460s 00:08:53.136 user 0m15.087s 00:08:53.136 sys 0m1.329s 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.136 19:48:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 ************************************ 00:08:53.136 END TEST lvs_grow_clean 00:08:53.136 ************************************ 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 ************************************ 00:08:53.136 START TEST lvs_grow_dirty 00:08:53.136 ************************************ 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.136 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:53.398 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:53.398 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:53.658 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=07c43812-503b-4b58-9894-ae3c967531b3 00:08:53.658 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:08:53.658 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:53.658 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:53.659 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:53.659 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
07c43812-503b-4b58-9894-ae3c967531b3 lvol 150 00:08:53.920 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=11e581a9-47fa-4318-8747-9677eaf027b1 00:08:53.920 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.920 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:53.920 [2024-07-24 19:48:41.805245] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:53.920 [2024-07-24 19:48:41.805299] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:53.920 true 00:08:53.920 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:08:53.920 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:54.180 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:54.180 19:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:54.180 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
11e581a9-47fa-4318-8747-9677eaf027b1 00:08:54.441 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:54.702 [2024-07-24 19:48:42.427154] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3518024 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3518024 /var/tmp/bdevperf.sock 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3518024 ']' 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.702 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.702 [2024-07-24 19:48:42.642533] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:08:54.702 [2024-07-24 19:48:42.642590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518024 ] 00:08:54.963 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.963 [2024-07-24 19:48:42.718155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.963 [2024-07-24 19:48:42.772135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.577 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.577 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:55.577 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:55.841 Nvme0n1 00:08:55.841 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:55.841 [ 00:08:55.842 { 00:08:55.842 "name": "Nvme0n1", 00:08:55.842 "aliases": [ 
00:08:55.842 "11e581a9-47fa-4318-8747-9677eaf027b1" 00:08:55.842 ], 00:08:55.842 "product_name": "NVMe disk", 00:08:55.842 "block_size": 4096, 00:08:55.842 "num_blocks": 38912, 00:08:55.842 "uuid": "11e581a9-47fa-4318-8747-9677eaf027b1", 00:08:55.842 "assigned_rate_limits": { 00:08:55.842 "rw_ios_per_sec": 0, 00:08:55.842 "rw_mbytes_per_sec": 0, 00:08:55.842 "r_mbytes_per_sec": 0, 00:08:55.842 "w_mbytes_per_sec": 0 00:08:55.842 }, 00:08:55.842 "claimed": false, 00:08:55.842 "zoned": false, 00:08:55.842 "supported_io_types": { 00:08:55.842 "read": true, 00:08:55.842 "write": true, 00:08:55.842 "unmap": true, 00:08:55.842 "flush": true, 00:08:55.842 "reset": true, 00:08:55.842 "nvme_admin": true, 00:08:55.842 "nvme_io": true, 00:08:55.842 "nvme_io_md": false, 00:08:55.842 "write_zeroes": true, 00:08:55.842 "zcopy": false, 00:08:55.842 "get_zone_info": false, 00:08:55.842 "zone_management": false, 00:08:55.842 "zone_append": false, 00:08:55.842 "compare": true, 00:08:55.842 "compare_and_write": true, 00:08:55.842 "abort": true, 00:08:55.842 "seek_hole": false, 00:08:55.842 "seek_data": false, 00:08:55.842 "copy": true, 00:08:55.842 "nvme_iov_md": false 00:08:55.842 }, 00:08:55.842 "memory_domains": [ 00:08:55.842 { 00:08:55.842 "dma_device_id": "system", 00:08:55.842 "dma_device_type": 1 00:08:55.842 } 00:08:55.842 ], 00:08:55.842 "driver_specific": { 00:08:55.842 "nvme": [ 00:08:55.842 { 00:08:55.842 "trid": { 00:08:55.842 "trtype": "TCP", 00:08:55.842 "adrfam": "IPv4", 00:08:55.842 "traddr": "10.0.0.2", 00:08:55.842 "trsvcid": "4420", 00:08:55.842 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:55.842 }, 00:08:55.842 "ctrlr_data": { 00:08:55.842 "cntlid": 1, 00:08:55.842 "vendor_id": "0x8086", 00:08:55.842 "model_number": "SPDK bdev Controller", 00:08:55.842 "serial_number": "SPDK0", 00:08:55.842 "firmware_revision": "24.09", 00:08:55.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.842 "oacs": { 00:08:55.842 "security": 0, 00:08:55.842 "format": 0, 00:08:55.842 
"firmware": 0, 00:08:55.842 "ns_manage": 0 00:08:55.842 }, 00:08:55.842 "multi_ctrlr": true, 00:08:55.842 "ana_reporting": false 00:08:55.842 }, 00:08:55.842 "vs": { 00:08:55.842 "nvme_version": "1.3" 00:08:55.842 }, 00:08:55.842 "ns_data": { 00:08:55.842 "id": 1, 00:08:55.842 "can_share": true 00:08:55.842 } 00:08:55.842 } 00:08:55.842 ], 00:08:55.842 "mp_policy": "active_passive" 00:08:55.842 } 00:08:55.842 } 00:08:55.842 ] 00:08:55.842 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.842 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3518351 00:08:56.103 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:56.103 Running I/O for 10 seconds... 00:08:57.045 Latency(us) 00:08:57.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.045 Nvme0n1 : 1.00 18035.00 70.45 0.00 0.00 0.00 0.00 0.00 00:08:57.045 =================================================================================================================== 00:08:57.045 Total : 18035.00 70.45 0.00 0.00 0.00 0.00 0.00 00:08:57.045 00:08:57.988 19:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 07c43812-503b-4b58-9894-ae3c967531b3 00:08:57.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.988 Nvme0n1 : 2.00 18148.50 70.89 0.00 0.00 0.00 0.00 0.00 00:08:57.988 =================================================================================================================== 00:08:57.988 Total : 18148.50 70.89 
0.00 0.00 0.00 0.00 0.00 00:08:57.988 00:08:58.249 true 00:08:58.249 19:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:08:58.249 19:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:58.249 19:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:58.249 19:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:58.249 19:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3518351 00:08:59.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.192 Nvme0n1 : 3.00 18177.33 71.01 0.00 0.00 0.00 0.00 0.00 00:08:59.192 =================================================================================================================== 00:08:59.192 Total : 18177.33 71.01 0.00 0.00 0.00 0.00 0.00 00:08:59.192 00:09:00.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.135 Nvme0n1 : 4.00 18226.25 71.20 0.00 0.00 0.00 0.00 0.00 00:09:00.135 =================================================================================================================== 00:09:00.135 Total : 18226.25 71.20 0.00 0.00 0.00 0.00 0.00 00:09:00.135 00:09:01.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.078 Nvme0n1 : 5.00 18250.20 71.29 0.00 0.00 0.00 0.00 0.00 00:09:01.078 =================================================================================================================== 00:09:01.078 Total : 18250.20 71.29 0.00 0.00 0.00 0.00 0.00 00:09:01.078 00:09:02.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:09:02.021 Nvme0n1 : 6.00 18264.83 71.35 0.00 0.00 0.00 0.00 0.00 00:09:02.021 =================================================================================================================== 00:09:02.021 Total : 18264.83 71.35 0.00 0.00 0.00 0.00 0.00 00:09:02.021 00:09:02.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.964 Nvme0n1 : 7.00 18283.86 71.42 0.00 0.00 0.00 0.00 0.00 00:09:02.964 =================================================================================================================== 00:09:02.964 Total : 18283.86 71.42 0.00 0.00 0.00 0.00 0.00 00:09:02.964 00:09:04.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.351 Nvme0n1 : 8.00 18297.12 71.47 0.00 0.00 0.00 0.00 0.00 00:09:04.351 =================================================================================================================== 00:09:04.351 Total : 18297.12 71.47 0.00 0.00 0.00 0.00 0.00 00:09:04.351 00:09:05.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.295 Nvme0n1 : 9.00 18302.56 71.49 0.00 0.00 0.00 0.00 0.00 00:09:05.295 =================================================================================================================== 00:09:05.295 Total : 18302.56 71.49 0.00 0.00 0.00 0.00 0.00 00:09:05.295 00:09:06.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.237 Nvme0n1 : 10.00 18315.40 71.54 0.00 0.00 0.00 0.00 0.00 00:09:06.237 =================================================================================================================== 00:09:06.237 Total : 18315.40 71.54 0.00 0.00 0.00 0.00 0.00 00:09:06.237 00:09:06.237 00:09:06.237 Latency(us) 00:09:06.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.237 Nvme0n1 : 10.00 18319.75 71.56 0.00 0.00 6984.61 
2826.24 12997.97 00:09:06.237 =================================================================================================================== 00:09:06.237 Total : 18319.75 71.56 0.00 0.00 6984.61 2826.24 12997.97 00:09:06.237 0 00:09:06.237 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3518024 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3518024 ']' 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3518024 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3518024 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3518024' 00:09:06.238 killing process with pid 3518024 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3518024 00:09:06.238 Received shutdown signal, test time was about 10.000000 seconds 00:09:06.238 00:09:06.238 Latency(us) 00:09:06.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.238 =================================================================================================================== 00:09:06.238 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:09:06.238 19:48:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3518024 00:09:06.238 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.499 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.499 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:06.499 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3514362 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3514362 00:09:06.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3514362 Killed "${NVMF_APP[@]}" "$@" 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3520524 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3520524 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3520524 ']' 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.760 19:48:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:07.021 [2024-07-24 19:48:54.728238] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:09:07.021 [2024-07-24 19:48:54.728299] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.021 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.021 [2024-07-24 19:48:54.795471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.021 [2024-07-24 19:48:54.864311] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.021 [2024-07-24 19:48:54.864349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.021 [2024-07-24 19:48:54.864357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.021 [2024-07-24 19:48:54.864364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.021 [2024-07-24 19:48:54.864370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
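After the target restarts, the lvs_grow_dirty test recovers the dirty lvstore and reads `free_clusters` out of the `bdev_lvol_get_lvstores` response with `jq -r '.[0].free_clusters'`. A minimal Python sketch of that extraction step, run against a sample payload shaped like the RPC response (the UUID and cluster counts are copied from this log, not queried from a live target):

```python
import json

# Sample response shaped like `rpc.py bdev_lvol_get_lvstores -u <uuid>`;
# the values mirror those observed in this log (61 free of 99 data clusters).
response = json.loads("""
[
  {
    "uuid": "07c43812-503b-4b58-9894-ae3c967531b3",
    "name": "lvs",
    "base_bdev": "aio_bdev",
    "total_data_clusters": 99,
    "free_clusters": 61
  }
]
""")

# Equivalent of `jq -r '.[0].free_clusters'` in the test script.
free_clusters = response[0]["free_clusters"]
data_clusters = response[0]["total_data_clusters"]

# The test at nvmf_lvs_grow.sh@79/@80 asserts exactly these values
# after blobstore recovery.
assert free_clusters == 61
assert data_clusters == 99
print(free_clusters)
```

The shell test performs the same comparison twice, once right after recovery and once after re-creating the aio bdev, to confirm the recovered lvstore reports consistent cluster accounting.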
00:09:07.021 [2024-07-24 19:48:54.864389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.592 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.592 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:07.592 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.592 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.592 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:07.592 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.592 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.853 [2024-07-24 19:48:55.657608] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:07.853 [2024-07-24 19:48:55.657694] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:07.853 [2024-07-24 19:48:55.657726] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 11e581a9-47fa-4318-8747-9677eaf027b1 00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=11e581a9-47fa-4318-8747-9677eaf027b1 
00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.853 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.114 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 11e581a9-47fa-4318-8747-9677eaf027b1 -t 2000 00:09:08.114 [ 00:09:08.114 { 00:09:08.114 "name": "11e581a9-47fa-4318-8747-9677eaf027b1", 00:09:08.114 "aliases": [ 00:09:08.114 "lvs/lvol" 00:09:08.114 ], 00:09:08.114 "product_name": "Logical Volume", 00:09:08.114 "block_size": 4096, 00:09:08.114 "num_blocks": 38912, 00:09:08.114 "uuid": "11e581a9-47fa-4318-8747-9677eaf027b1", 00:09:08.114 "assigned_rate_limits": { 00:09:08.114 "rw_ios_per_sec": 0, 00:09:08.114 "rw_mbytes_per_sec": 0, 00:09:08.114 "r_mbytes_per_sec": 0, 00:09:08.114 "w_mbytes_per_sec": 0 00:09:08.114 }, 00:09:08.114 "claimed": false, 00:09:08.114 "zoned": false, 00:09:08.114 "supported_io_types": { 00:09:08.114 "read": true, 00:09:08.114 "write": true, 00:09:08.114 "unmap": true, 00:09:08.114 "flush": false, 00:09:08.114 "reset": true, 00:09:08.114 "nvme_admin": false, 00:09:08.114 "nvme_io": false, 00:09:08.114 "nvme_io_md": false, 00:09:08.114 "write_zeroes": true, 00:09:08.114 "zcopy": false, 00:09:08.114 "get_zone_info": false, 00:09:08.114 "zone_management": false, 00:09:08.114 "zone_append": 
false, 00:09:08.115 "compare": false, 00:09:08.115 "compare_and_write": false, 00:09:08.115 "abort": false, 00:09:08.115 "seek_hole": true, 00:09:08.115 "seek_data": true, 00:09:08.115 "copy": false, 00:09:08.115 "nvme_iov_md": false 00:09:08.115 }, 00:09:08.115 "driver_specific": { 00:09:08.115 "lvol": { 00:09:08.115 "lvol_store_uuid": "07c43812-503b-4b58-9894-ae3c967531b3", 00:09:08.115 "base_bdev": "aio_bdev", 00:09:08.115 "thin_provision": false, 00:09:08.115 "num_allocated_clusters": 38, 00:09:08.115 "snapshot": false, 00:09:08.115 "clone": false, 00:09:08.115 "esnap_clone": false 00:09:08.115 } 00:09:08.115 } 00:09:08.115 } 00:09:08.115 ] 00:09:08.115 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:08.115 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:08.115 19:48:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:08.376 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:08.376 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:08.376 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:08.376 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:08.376 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:08.637 [2024-07-24 19:48:56.441551] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.637 19:48:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:08.637 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:08.899 request: 00:09:08.899 { 00:09:08.899 "uuid": "07c43812-503b-4b58-9894-ae3c967531b3", 00:09:08.899 "method": "bdev_lvol_get_lvstores", 00:09:08.899 "req_id": 1 00:09:08.899 } 00:09:08.899 Got JSON-RPC error response 00:09:08.899 response: 00:09:08.899 { 00:09:08.899 "code": -19, 00:09:08.899 "message": "No such device" 00:09:08.899 } 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.899 aio_bdev 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 11e581a9-47fa-4318-8747-9677eaf027b1 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=11e581a9-47fa-4318-8747-9677eaf027b1 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.899 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:09.159 19:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 11e581a9-47fa-4318-8747-9677eaf027b1 -t 2000 00:09:09.159 [ 00:09:09.159 { 00:09:09.159 "name": "11e581a9-47fa-4318-8747-9677eaf027b1", 00:09:09.159 "aliases": [ 00:09:09.159 "lvs/lvol" 00:09:09.159 ], 00:09:09.160 "product_name": "Logical Volume", 00:09:09.160 "block_size": 4096, 00:09:09.160 "num_blocks": 38912, 00:09:09.160 "uuid": "11e581a9-47fa-4318-8747-9677eaf027b1", 00:09:09.160 "assigned_rate_limits": { 00:09:09.160 "rw_ios_per_sec": 0, 00:09:09.160 "rw_mbytes_per_sec": 0, 00:09:09.160 "r_mbytes_per_sec": 0, 00:09:09.160 "w_mbytes_per_sec": 0 00:09:09.160 }, 00:09:09.160 "claimed": false, 00:09:09.160 "zoned": false, 00:09:09.160 "supported_io_types": { 00:09:09.160 "read": true, 00:09:09.160 "write": true, 00:09:09.160 "unmap": true, 00:09:09.160 "flush": false, 00:09:09.160 "reset": true, 00:09:09.160 "nvme_admin": false, 00:09:09.160 "nvme_io": false, 00:09:09.160 "nvme_io_md": false, 00:09:09.160 "write_zeroes": true, 00:09:09.160 "zcopy": false, 00:09:09.160 "get_zone_info": false, 00:09:09.160 "zone_management": false, 00:09:09.160 "zone_append": false, 00:09:09.160 "compare": false, 00:09:09.160 "compare_and_write": false, 
00:09:09.160 "abort": false, 00:09:09.160 "seek_hole": true, 00:09:09.160 "seek_data": true, 00:09:09.160 "copy": false, 00:09:09.160 "nvme_iov_md": false 00:09:09.160 }, 00:09:09.160 "driver_specific": { 00:09:09.160 "lvol": { 00:09:09.160 "lvol_store_uuid": "07c43812-503b-4b58-9894-ae3c967531b3", 00:09:09.160 "base_bdev": "aio_bdev", 00:09:09.160 "thin_provision": false, 00:09:09.160 "num_allocated_clusters": 38, 00:09:09.160 "snapshot": false, 00:09:09.160 "clone": false, 00:09:09.160 "esnap_clone": false 00:09:09.160 } 00:09:09.160 } 00:09:09.160 } 00:09:09.160 ] 00:09:09.160 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:09.160 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:09.160 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:09.421 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:09.421 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:09.421 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:09.680 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:09.681 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 11e581a9-47fa-4318-8747-9677eaf027b1 00:09:09.681 19:48:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07c43812-503b-4b58-9894-ae3c967531b3 00:09:09.941 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.941 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.202 00:09:10.202 real 0m16.913s 00:09:10.202 user 0m44.343s 00:09:10.202 sys 0m2.966s 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.202 ************************************ 00:09:10.202 END TEST lvs_grow_dirty 00:09:10.202 ************************************ 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:10.202 19:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:10.202 nvmf_trace.0 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.202 rmmod nvme_tcp 00:09:10.202 rmmod nvme_fabrics 00:09:10.202 rmmod nvme_keyring 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3520524 ']' 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3520524 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3520524 ']' 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3520524 
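The `killprocess` helper traced above probes the target with `kill -0 3520524` before terminating it: signal 0 delivers nothing but fails if the pid no longer exists. A hedged Python sketch of the same liveness-check technique (using a throwaway `sleep` child rather than an SPDK target):

```python
import os
import signal
import subprocess

# Stand-in for the nvmf_tgt process in the log; any long-lived child works.
proc = subprocess.Popen(["sleep", "30"])

def alive(pid: int) -> bool:
    """Mirror `kill -0 pid`: signal 0 probes existence without delivering."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False

assert alive(proc.pid)            # process exists, as `kill -0` would confirm
proc.send_signal(signal.SIGKILL)  # analogous to the script's kill step
proc.wait()                       # reap the child so the pid disappears
assert not alive(proc.pid)
print("ok")
```

Checking with signal 0 first is what lets the script distinguish "process already gone" from "kill failed", which matters here because the test deliberately `kill -9`s the target earlier in the run.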
00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3520524 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3520524' 00:09:10.202 killing process with pid 3520524 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3520524 00:09:10.202 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3520524 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.463 19:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.381 19:49:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.381 00:09:12.381 real 0m43.340s 00:09:12.381 user 1m5.441s 00:09:12.381 sys 0m10.065s 00:09:12.381 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.381 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:12.381 ************************************ 00:09:12.381 END TEST nvmf_lvs_grow 00:09:12.381 ************************************ 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.643 ************************************ 00:09:12.643 START TEST nvmf_bdev_io_wait 00:09:12.643 ************************************ 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:12.643 * Looking for test storage... 
00:09:12.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:12.643 19:49:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.643 19:49:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.790 19:49:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:20.790 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:20.790 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.790 19:49:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:20.790 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:20.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.790 19:49:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.790 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:09:20.791 00:09:20.791 --- 10.0.0.2 ping statistics --- 00:09:20.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.791 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:09:20.791 00:09:20.791 --- 10.0.0.1 ping statistics --- 00:09:20.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.791 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3525434 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@482 -- # waitforlisten 3525434 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3525434 ']' 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.791 19:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 [2024-07-24 19:49:07.680738] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:09:20.791 [2024-07-24 19:49:07.680802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.791 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.791 [2024-07-24 19:49:07.751245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.791 [2024-07-24 19:49:07.827316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:20.791 [2024-07-24 19:49:07.827355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.791 [2024-07-24 19:49:07.827363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.791 [2024-07-24 19:49:07.827369] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.791 [2024-07-24 19:49:07.827375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.791 [2024-07-24 19:49:07.827517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.791 [2024-07-24 19:49:07.827636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.791 [2024-07-24 19:49:07.827793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.791 [2024-07-24 19:49:07.827794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.791 
19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 [2024-07-24 19:49:08.567675] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 Malloc0 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.791 
19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 [2024-07-24 19:49:08.635326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3525731 00:09:20.791 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3525734 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:20.792 { 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme$subsystem", 00:09:20.792 "trtype": "$TEST_TRANSPORT", 00:09:20.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.792 "adrfam": "ipv4", 00:09:20.792 "trsvcid": "$NVMF_PORT", 00:09:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.792 "hdgst": ${hdgst:-false}, 00:09:20.792 "ddgst": ${ddgst:-false} 00:09:20.792 }, 00:09:20.792 "method": "bdev_nvme_attach_controller" 00:09:20.792 } 00:09:20.792 EOF 00:09:20.792 )") 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3525736 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:20.792 19:49:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:20.792 { 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme$subsystem", 00:09:20.792 "trtype": "$TEST_TRANSPORT", 00:09:20.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.792 "adrfam": "ipv4", 00:09:20.792 "trsvcid": "$NVMF_PORT", 00:09:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.792 "hdgst": ${hdgst:-false}, 00:09:20.792 "ddgst": ${ddgst:-false} 00:09:20.792 }, 00:09:20.792 "method": "bdev_nvme_attach_controller" 00:09:20.792 } 00:09:20.792 EOF 00:09:20.792 )") 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3525740 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:20.792 { 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme$subsystem", 00:09:20.792 "trtype": "$TEST_TRANSPORT", 00:09:20.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.792 "adrfam": "ipv4", 
00:09:20.792 "trsvcid": "$NVMF_PORT", 00:09:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.792 "hdgst": ${hdgst:-false}, 00:09:20.792 "ddgst": ${ddgst:-false} 00:09:20.792 }, 00:09:20.792 "method": "bdev_nvme_attach_controller" 00:09:20.792 } 00:09:20.792 EOF 00:09:20.792 )") 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:20.792 { 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme$subsystem", 00:09:20.792 "trtype": "$TEST_TRANSPORT", 00:09:20.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.792 "adrfam": "ipv4", 00:09:20.792 "trsvcid": "$NVMF_PORT", 00:09:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.792 "hdgst": ${hdgst:-false}, 00:09:20.792 "ddgst": ${ddgst:-false} 00:09:20.792 }, 00:09:20.792 "method": "bdev_nvme_attach_controller" 00:09:20.792 } 00:09:20.792 EOF 00:09:20.792 )") 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 3525731 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme1", 00:09:20.792 "trtype": "tcp", 00:09:20.792 "traddr": "10.0.0.2", 00:09:20.792 "adrfam": "ipv4", 00:09:20.792 "trsvcid": "4420", 00:09:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.792 "hdgst": false, 00:09:20.792 "ddgst": false 00:09:20.792 }, 00:09:20.792 "method": "bdev_nvme_attach_controller" 00:09:20.792 }' 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme1", 00:09:20.792 "trtype": "tcp", 00:09:20.792 "traddr": "10.0.0.2", 00:09:20.792 "adrfam": "ipv4", 00:09:20.792 "trsvcid": "4420", 00:09:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.792 "hdgst": false, 00:09:20.792 "ddgst": false 00:09:20.792 }, 00:09:20.792 "method": "bdev_nvme_attach_controller" 00:09:20.792 }' 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme1", 00:09:20.792 "trtype": "tcp", 00:09:20.792 "traddr": "10.0.0.2", 00:09:20.792 "adrfam": "ipv4", 00:09:20.792 "trsvcid": "4420", 00:09:20.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.792 "hdgst": false, 00:09:20.792 "ddgst": false 00:09:20.792 }, 00:09:20.792 "method": "bdev_nvme_attach_controller" 00:09:20.792 }' 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:20.792 19:49:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:20.792 "params": { 00:09:20.792 "name": "Nvme1", 00:09:20.792 "trtype": "tcp", 00:09:20.792 "traddr": "10.0.0.2", 00:09:20.792 "adrfam": "ipv4", 00:09:20.793 "trsvcid": "4420", 00:09:20.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.793 "hdgst": false, 00:09:20.793 "ddgst": false 00:09:20.793 }, 00:09:20.793 "method": "bdev_nvme_attach_controller" 00:09:20.793 }' 00:09:20.793 [2024-07-24 19:49:08.689431] Starting SPDK v24.09-pre git sha1 
19f5787c8 / DPDK 24.03.0 initialization... 00:09:20.793 [2024-07-24 19:49:08.689480] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:20.793 [2024-07-24 19:49:08.693944] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:09:20.793 [2024-07-24 19:49:08.693990] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:20.793 [2024-07-24 19:49:08.699074] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:09:20.793 [2024-07-24 19:49:08.699122] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:20.793 [2024-07-24 19:49:08.699235] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:09:20.793 [2024-07-24 19:49:08.699299] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:20.793 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.055 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.055 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.055 [2024-07-24 19:49:08.815642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.055 [2024-07-24 19:49:08.856275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.055 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.055 [2024-07-24 19:49:08.865664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:21.055 [2024-07-24 19:49:08.900738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.055 [2024-07-24 19:49:08.906823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:21.055 [2024-07-24 19:49:08.950605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:21.055 [2024-07-24 19:49:08.962746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.316 [2024-07-24 19:49:09.014698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:21.316 Running I/O for 1 seconds... 00:09:21.316 Running I/O for 1 seconds... 00:09:21.316 Running I/O for 1 seconds... 00:09:21.576 Running I/O for 1 seconds... 
00:09:22.252 00:09:22.252 Latency(us) 00:09:22.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.252 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:22.253 Nvme1n1 : 1.00 187172.10 731.14 0.00 0.00 681.54 269.65 1406.29 00:09:22.253 =================================================================================================================== 00:09:22.253 Total : 187172.10 731.14 0.00 0.00 681.54 269.65 1406.29 00:09:22.253 00:09:22.253 Latency(us) 00:09:22.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.253 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:22.253 Nvme1n1 : 1.06 8901.32 34.77 0.00 0.00 13713.06 6526.29 56360.96 00:09:22.253 =================================================================================================================== 00:09:22.253 Total : 8901.32 34.77 0.00 0.00 13713.06 6526.29 56360.96 00:09:22.513 00:09:22.513 Latency(us) 00:09:22.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.513 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:22.513 Nvme1n1 : 1.00 20642.02 80.63 0.00 0.00 6186.56 3386.03 14964.05 00:09:22.513 =================================================================================================================== 00:09:22.513 Total : 20642.02 80.63 0.00 0.00 6186.56 3386.03 14964.05 00:09:22.513 00:09:22.513 Latency(us) 00:09:22.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.513 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:22.514 Nvme1n1 : 1.00 9270.44 36.21 0.00 0.00 13769.06 4314.45 33423.36 00:09:22.514 =================================================================================================================== 00:09:22.514 Total : 9270.44 36.21 0.00 0.00 13769.06 4314.45 33423.36 00:09:22.775 19:49:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3525734 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3525736 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3525740 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.775 rmmod nvme_tcp 00:09:22.775 rmmod nvme_fabrics 00:09:22.775 rmmod nvme_keyring 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@124 -- # set -e 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3525434 ']' 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3525434 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3525434 ']' 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3525434 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3525434 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3525434' 00:09:22.775 killing process with pid 3525434 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3525434 00:09:22.775 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3525434 00:09:23.036 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.036 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.036 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.036 
19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.036 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.036 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.036 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.036 19:49:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.951 00:09:24.951 real 0m12.405s 00:09:24.951 user 0m18.560s 00:09:24.951 sys 0m6.824s 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.951 ************************************ 00:09:24.951 END TEST nvmf_bdev_io_wait 00:09:24.951 ************************************ 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.951 ************************************ 00:09:24.951 START TEST nvmf_queue_depth 00:09:24.951 ************************************ 00:09:24.951 19:49:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:25.213 * Looking for test storage... 00:09:25.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.213 19:49:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:25.213 
19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.213 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.214 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.214 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.214 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.214 19:49:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:33.362 19:49:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.362 19:49:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:33.362 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.362 
19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:33.362 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:33.362 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:33.362 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:33.362 
19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.362 19:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:09:33.363 00:09:33.363 --- 10.0.0.2 ping statistics --- 00:09:33.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.363 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:09:33.363 00:09:33.363 --- 10.0.0.1 ping statistics --- 00:09:33.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.363 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3530182 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 
3530182 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3530182 ']' 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.363 19:49:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 [2024-07-24 19:49:20.307255] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:09:33.363 [2024-07-24 19:49:20.307306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.363 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.363 [2024-07-24 19:49:20.392456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.363 [2024-07-24 19:49:20.481979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.363 [2024-07-24 19:49:20.482043] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:33.363 [2024-07-24 19:49:20.482051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.363 [2024-07-24 19:49:20.482058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.363 [2024-07-24 19:49:20.482064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.363 [2024-07-24 19:49:20.482095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 [2024-07-24 19:49:21.134582] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 Malloc0 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 [2024-07-24 19:49:21.206394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.363 19:49:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3530504 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3530504 /var/tmp/bdevperf.sock 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3530504 ']' 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:33.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.363 19:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:33.363 [2024-07-24 19:49:21.260975] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:09:33.363 [2024-07-24 19:49:21.261035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530504 ] 00:09:33.363 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.624 [2024-07-24 19:49:21.321425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.624 [2024-07-24 19:49:21.387167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.196 19:49:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.196 19:49:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:34.196 19:49:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:34.196 19:49:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.196 19:49:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.458 NVMe0n1 00:09:34.458 19:49:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.458 19:49:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:34.458 Running I/O for 10 seconds... 
00:09:44.463 00:09:44.463 Latency(us) 00:09:44.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.463 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:44.463 Verification LBA range: start 0x0 length 0x4000 00:09:44.463 NVMe0n1 : 10.06 11558.52 45.15 0.00 0.00 88241.54 18459.31 71215.79 00:09:44.463 =================================================================================================================== 00:09:44.463 Total : 11558.52 45.15 0.00 0.00 88241.54 18459.31 71215.79 00:09:44.463 0 00:09:44.463 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3530504 00:09:44.463 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3530504 ']' 00:09:44.463 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3530504 00:09:44.463 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:44.463 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.463 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3530504 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3530504' 00:09:44.724 killing process with pid 3530504 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3530504 00:09:44.724 Received shutdown signal, test time was about 10.000000 seconds 00:09:44.724 00:09:44.724 Latency(us) 00:09:44.724 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.724 =================================================================================================================== 00:09:44.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3530504 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.724 rmmod nvme_tcp 00:09:44.724 rmmod nvme_fabrics 00:09:44.724 rmmod nvme_keyring 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3530182 ']' 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3530182 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3530182 ']' 
00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3530182 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.724 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3530182 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3530182' 00:09:44.985 killing process with pid 3530182 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3530182 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3530182 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:09:44.985 19:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.532 19:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.533 00:09:47.533 real 0m22.007s 00:09:47.533 user 0m25.519s 00:09:47.533 sys 0m6.599s 00:09:47.533 19:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.533 19:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.533 ************************************ 00:09:47.533 END TEST nvmf_queue_depth 00:09:47.533 ************************************ 00:09:47.533 19:49:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:47.533 19:49:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.533 19:49:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.533 19:49:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.533 ************************************ 00:09:47.533 START TEST nvmf_target_multipath 00:09:47.533 ************************************ 00:09:47.533 19:49:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:47.533 * Looking for test storage... 
00:09:47.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.533 19:49:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.124 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 
00:09:54.125 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:54.125 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:54.125 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:54.125 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.125 19:49:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.125 19:49:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:09:54.387 00:09:54.387 --- 10.0.0.2 ping statistics --- 00:09:54.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.387 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:09:54.387 00:09:54.387 --- 10.0.0.1 ping statistics --- 00:09:54.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.387 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:54.387 19:49:42 
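The trace above (nvmf/common.sh@243 and @270) stores the namespace wrapper as a bash array, `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")`, and later splices it in front of `NVMF_APP` so the target binary runs inside the netns. A minimal sketch of that command-prefix-array pattern, with `env` standing in for `ip netns exec` so it runs without root (the `NS_CMD`/`APP` names here are illustrative, not from the suite):

```shell
#!/usr/bin/env bash
# Command-prefix-array pattern from nvmf/common.sh@243/@270, sketched.
# Real suite uses: NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NS_CMD=(env LC_ALL=C)        # harmless stand-in prefix (assumption, avoids root)
APP=(echo nvmf_tgt -i 0)     # stand-in for the real NVMF_APP invocation

# Splice the prefix in front, exactly as common.sh@270 does:
APP=("${NS_CMD[@]}" "${APP[@]}")

# Expands word-by-word with quoting preserved:
"${APP[@]}"                  # prints: nvmf_tgt -i 0
```

Keeping the prefix as an array (rather than a string) preserves word splitting and quoting when the namespace name contains no surprises and when later arguments do.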
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:54.387 only one NIC for nvmf test 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.387 rmmod nvme_tcp 00:09:54.387 rmmod nvme_fabrics 00:09:54.387 rmmod nvme_keyring 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.387 19:49:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.387 19:49:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:56.938 19:49:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.938 00:09:56.938 real 0m9.371s 00:09:56.938 user 0m2.000s 00:09:56.938 sys 0m5.284s 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:56.938 ************************************ 00:09:56.938 END TEST nvmf_target_multipath 00:09:56.938 ************************************ 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.938 
19:49:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.938 ************************************ 00:09:56.938 START TEST nvmf_zcopy 00:09:56.938 ************************************ 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:56.938 * Looking for test storage... 00:09:56.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.938 19:49:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.938 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.939 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.939 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:56.939 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:56.939 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:56.939 19:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.534 19:49:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:03.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:03.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:03.534 19:49:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.534 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:03.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.535 
19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:03.535 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.535 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:03.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:10:03.797 00:10:03.797 --- 10.0.0.2 ping statistics --- 00:10:03.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.797 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.446 ms 00:10:03.797 00:10:03.797 --- 10.0.0.1 ping statistics --- 00:10:03.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.797 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
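The device-discovery pass traced above reduces sysfs paths to bare interface names at nvmf/common.sh@399 with `pci_net_devs=("${pci_net_devs[@]##*/}")`. A self-contained sketch of that expansion (the example paths mirror the `0000:4b:00.x` devices in this log):

```shell
#!/usr/bin/env bash
# Basename-strip expansion from nvmf/common.sh@399, in isolation.
# "${arr[@]##*/}" applies ##*/ to every element: it removes the longest
# prefix ending in '/', turning a sysfs path into the bare netdev name.
pci_net_devs=("/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:4b:00.1/net/cvl_0_1")

pci_net_devs=("${pci_net_devs[@]##*/}")

printf '%s\n' "${pci_net_devs[@]}"   # prints: cvl_0_0, then cvl_0_1
```

This is why the log's "Found net devices under 0000:4b:00.0: cvl_0_0" line reports a plain interface name rather than a full `/sys/bus/pci/...` path.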
-- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3541110 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3541110 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3541110 ']' 00:10:03.797 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.798 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.798 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.798 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.798 19:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.059 [2024-07-24 19:49:51.752224] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:10:04.059 [2024-07-24 19:49:51.752283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.059 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.059 [2024-07-24 19:49:51.840915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.059 [2024-07-24 19:49:51.932606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.059 [2024-07-24 19:49:51.932670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.059 [2024-07-24 19:49:51.932678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.059 [2024-07-24 19:49:51.932685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.059 [2024-07-24 19:49:51.932691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:04.059 [2024-07-24 19:49:51.932719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.633 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.895 [2024-07-24 19:49:52.588601] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.895 [2024-07-24 19:49:52.612858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.895 malloc0 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.895 19:49:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:04.895 { 00:10:04.895 "params": { 00:10:04.895 "name": "Nvme$subsystem", 00:10:04.895 "trtype": "$TEST_TRANSPORT", 00:10:04.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.895 "adrfam": "ipv4", 00:10:04.895 "trsvcid": "$NVMF_PORT", 00:10:04.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.895 "hdgst": ${hdgst:-false}, 00:10:04.895 "ddgst": ${ddgst:-false} 00:10:04.895 }, 00:10:04.895 "method": "bdev_nvme_attach_controller" 00:10:04.895 } 00:10:04.895 EOF 00:10:04.895 )") 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:04.895 19:49:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:04.895 "params": { 00:10:04.895 "name": "Nvme1", 00:10:04.895 "trtype": "tcp", 00:10:04.895 "traddr": "10.0.0.2", 00:10:04.895 "adrfam": "ipv4", 00:10:04.895 "trsvcid": "4420", 00:10:04.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.895 "hdgst": false, 00:10:04.895 "ddgst": false 00:10:04.895 }, 00:10:04.895 "method": "bdev_nvme_attach_controller" 00:10:04.895 }' 00:10:04.895 [2024-07-24 19:49:52.711949] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:10:04.895 [2024-07-24 19:49:52.712012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541195 ] 00:10:04.895 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.895 [2024-07-24 19:49:52.773120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.895 [2024-07-24 19:49:52.839747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.468 Running I/O for 10 seconds... 
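The bdevperf run prints a fixed-width summary table (shown next in the log). Its `Total` row can be pulled apart with a small awk one-liner; this is an informal sketch, not part of the test scripts: the field positions assume the column layout bdevperf emits here (name, colon, IOPS, MiB/s, Fail/s, TO/s, then average/min/max latency in microseconds), and the sample row is copied from this run's output.

```shell
# Parse a bdevperf "Total" summary row (column layout as printed in this log).
# Sample row copied from the 10-second verify run in this log:
line='Total : 9420.06 73.59 0.00 0.00 13489.64 2908.16 43472.21'

# Fields after "Total :" are IOPS, MiB/s, Fail/s, TO/s, avg/min/max latency (us).
iops=$(echo "$line" | awk '{print $3}')
avg_lat_us=$(echo "$line" | awk '{print $7}')

echo "IOPS=$iops avg_latency_us=$avg_lat_us"
```

The same pattern works for the per-job rows, since bdevperf aligns them to the same columns.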
00:10:15.476 00:10:15.476 Latency(us) 00:10:15.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.476 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:15.476 Verification LBA range: start 0x0 length 0x1000 00:10:15.476 Nvme1n1 : 10.05 9420.06 73.59 0.00 0.00 13489.64 2908.16 43472.21 00:10:15.476 =================================================================================================================== 00:10:15.476 Total : 9420.06 73.59 0.00 0.00 13489.64 2908.16 43472.21 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3543390 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:15.476 { 00:10:15.476 "params": { 00:10:15.476 "name": "Nvme$subsystem", 00:10:15.476 "trtype": "$TEST_TRANSPORT", 00:10:15.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.476 "adrfam": "ipv4", 00:10:15.476 "trsvcid": "$NVMF_PORT", 00:10:15.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.476 "hdgst": ${hdgst:-false}, 00:10:15.476 "ddgst": ${ddgst:-false} 00:10:15.476 }, 00:10:15.476 "method": "bdev_nvme_attach_controller" 00:10:15.476 } 00:10:15.476 EOF 00:10:15.476 )") 00:10:15.476 19:50:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:15.476 [2024-07-24 19:50:03.359567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.476 [2024-07-24 19:50:03.359599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:15.476 19:50:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:15.476 "params": { 00:10:15.476 "name": "Nvme1", 00:10:15.476 "trtype": "tcp", 00:10:15.476 "traddr": "10.0.0.2", 00:10:15.476 "adrfam": "ipv4", 00:10:15.476 "trsvcid": "4420", 00:10:15.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.476 "hdgst": false, 00:10:15.476 "ddgst": false 00:10:15.476 }, 00:10:15.476 "method": "bdev_nvme_attach_controller" 00:10:15.477 }' 00:10:15.477 [2024-07-24 19:50:03.371562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.477 [2024-07-24 19:50:03.371570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.477 [2024-07-24 19:50:03.383592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.477 [2024-07-24 19:50:03.383603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.477 [2024-07-24 19:50:03.395623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.477 [2024-07-24 19:50:03.395630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.477 [2024-07-24 
19:50:03.402009] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:10:15.477 [2024-07-24 19:50:03.402059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543390 ] 00:10:15.477 [2024-07-24 19:50:03.407653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.477 [2024-07-24 19:50:03.407660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.477 [2024-07-24 19:50:03.419684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.477 [2024-07-24 19:50:03.419692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.477 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.738 [2024-07-24 19:50:03.431714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.431722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.443745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.443753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.455776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.455783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.459856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.738 [2024-07-24 19:50:03.467808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.467817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.479838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.479846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.491868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.491879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.503899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.503910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.515928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.515936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.525205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.738 [2024-07-24 19:50:03.527959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.527967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.539995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.540006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.552025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.552036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.564052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 
19:50:03.564060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.576083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.576096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.588114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.588121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.600160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.600174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.612180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.612189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.624216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.624226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.636244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.636252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.648272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.648280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.660316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.660332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:15.738 Running I/O for 5 seconds... 00:10:15.738 [2024-07-24 19:50:03.672330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.672337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.738 [2024-07-24 19:50:03.691427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.738 [2024-07-24 19:50:03.691443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.999 [2024-07-24 19:50:03.703990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.999 [2024-07-24 19:50:03.704006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.999 [2024-07-24 19:50:03.717458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.999 [2024-07-24 19:50:03.717474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.999 [2024-07-24 19:50:03.730597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.999 [2024-07-24 19:50:03.730613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.999 [2024-07-24 19:50:03.743534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.999 [2024-07-24 19:50:03.743550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.999 [2024-07-24 19:50:03.756074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.999 [2024-07-24 19:50:03.756089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.769236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.769251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.782489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.782504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.795855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.795870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.809120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.809135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.822654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.822669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.835221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.835237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.848868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.848883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.861746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.861761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.874699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.874714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 
19:50:03.887300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.887315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.900661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.900676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.914091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.914106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.927618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.927633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.000 [2024-07-24 19:50:03.940457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.000 [2024-07-24 19:50:03.940472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.264 [2024-07-24 19:50:03.954060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.264 [2024-07-24 19:50:03.954075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.264 [2024-07-24 19:50:03.966482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.264 [2024-07-24 19:50:03.966497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.264 [2024-07-24 19:50:03.979700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.264 [2024-07-24 19:50:03.979714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.264 [2024-07-24 19:50:03.993041] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.264 [2024-07-24 19:50:03.993056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.264 [2024-07-24 19:50:04.006346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.264 [2024-07-24 19:50:04.006361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.264 [2024-07-24 19:50:04.018963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.264 [2024-07-24 19:50:04.018978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.264 [2024-07-24 19:50:04.032118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.264 [2024-07-24 19:50:04.032133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.045308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.045323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.058859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.058874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.072466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.072481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.085864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.085879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.099133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.099147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.112547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.112562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.125890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.125905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.139243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.139257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.152672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.152687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.165433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.165448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.178919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.178934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.192083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.192097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.204881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 
[2024-07-24 19:50:04.204895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.265 [2024-07-24 19:50:04.218091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.265 [2024-07-24 19:50:04.218106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.231300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.231316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.244501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.244516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.257869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.257883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.271074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.271089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.284322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.284337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.297838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.297853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.310965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.310980] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.323918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.323932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.336584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.336599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.348987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.349001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.362589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.362604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.375679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.375693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.389063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.389078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.402357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.402372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.554 [2024-07-24 19:50:04.415514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.554 [2024-07-24 19:50:04.415529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:16.554 [2024-07-24 19:50:04.428906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.555 [2024-07-24 19:50:04.428921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [previous error pair repeated every ~13 ms from 19:50:04.441225 through 19:50:06.546530] 00:10:18.679 [2024-07-24 19:50:06.559763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.679 [2024-07-24 19:50:06.559778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:18.679 [2024-07-24 19:50:06.572611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.679 [2024-07-24 19:50:06.572626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.679 [2024-07-24 19:50:06.585899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.679 [2024-07-24 19:50:06.585914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.679 [2024-07-24 19:50:06.599233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.679 [2024-07-24 19:50:06.599248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.679 [2024-07-24 19:50:06.612264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.679 [2024-07-24 19:50:06.612278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.679 [2024-07-24 19:50:06.625443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.679 [2024-07-24 19:50:06.625458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.638843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.638858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.651793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.651808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.665446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.665461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.679100] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.679115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.692767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.692782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.706167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.706182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.718929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.718944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.732477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.732492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.746257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.746272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.759131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.759146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.772515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.772530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.785764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.785783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.798779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.798793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.811867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.811882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.824824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.824839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.838300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.838315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.850953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.850967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.864207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.864222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.877780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 [2024-07-24 19:50:06.877795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.940 [2024-07-24 19:50:06.891118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.940 
[2024-07-24 19:50:06.891133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.904644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.904659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.917197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.917216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.930227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.930241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.942935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.942949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.956145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.956160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.969233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.969248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.982002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.982016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:06.995459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:06.995474] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.009142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.009156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.022273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.022287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.035538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.035556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.048598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.048613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.061846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.061861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.074884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.074898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.088398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.088413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.101715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.101730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:19.201 [2024-07-24 19:50:07.114574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.114588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.127808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.127823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.140945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.140960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.201 [2024-07-24 19:50:07.154436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.201 [2024-07-24 19:50:07.154450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.461 [2024-07-24 19:50:07.167686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.461 [2024-07-24 19:50:07.167701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.461 [2024-07-24 19:50:07.181442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.461 [2024-07-24 19:50:07.181457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.461 [2024-07-24 19:50:07.194092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.461 [2024-07-24 19:50:07.194106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.461 [2024-07-24 19:50:07.207284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.207298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.220481] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.220496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.233393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.233408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.246016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.246030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.259496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.259510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.272648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.272663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.285739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.285760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.298699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.298714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.312224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.312240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.324758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.324773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.337604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.337619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.350209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.350224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.362686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.362701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.375920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.375935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.389035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.389049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.401845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.401859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.462 [2024-07-24 19:50:07.414538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.462 [2024-07-24 19:50:07.414552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.722 [2024-07-24 19:50:07.427572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.722 
[2024-07-24 19:50:07.427588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.722 [2024-07-24 19:50:07.440484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.722 [2024-07-24 19:50:07.440498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.722 [2024-07-24 19:50:07.453860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.722 [2024-07-24 19:50:07.453875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.722 [2024-07-24 19:50:07.467374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.722 [2024-07-24 19:50:07.467388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.722 [2024-07-24 19:50:07.480710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.722 [2024-07-24 19:50:07.480725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.493658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.493672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.506710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.506725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.520212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.520228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.533492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.533511] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.546945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.546960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.559958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.559973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.573514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.573528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.586862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.586876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.599873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.599888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.613081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.613095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.626080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.626094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.638954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.638969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:19.723 [2024-07-24 19:50:07.651901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.651916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.723 [2024-07-24 19:50:07.664475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.723 [2024-07-24 19:50:07.664490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.678109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.678124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.690916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.690931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.704305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.704320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.716661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.716675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.729691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.729705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.743149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.743164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.756547] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.756562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.769415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.769430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.782599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.782614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.795645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.795660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.808058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.808073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.820868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.820882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.834026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.834040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.847431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.847447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.861063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.861078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.874097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.874112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.887234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.887248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.900285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.900300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.913356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.913370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.984 [2024-07-24 19:50:07.926360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.984 [2024-07-24 19:50:07.926375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:07.938960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:07.938975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:07.952197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:07.952215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:07.965165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 
[2024-07-24 19:50:07.965180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:07.977434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:07.977448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:07.990333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:07.990348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.003373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.003389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.016749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.016764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.029957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.029971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.042762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.042778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.055296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.055312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.069038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.069053] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.081964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.081978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.095077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.095092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.108108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.108123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.245 [2024-07-24 19:50:08.121352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.245 [2024-07-24 19:50:08.121367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.246 [2024-07-24 19:50:08.134919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.246 [2024-07-24 19:50:08.134935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.246 [2024-07-24 19:50:08.148507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.246 [2024-07-24 19:50:08.148522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.246 [2024-07-24 19:50:08.162077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.246 [2024-07-24 19:50:08.162091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.246 [2024-07-24 19:50:08.175668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.246 [2024-07-24 19:50:08.175683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:20.246 [2024-07-24 19:50:08.189101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.246 [2024-07-24 19:50:08.189116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.201835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.201850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.215335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.215357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.228252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.228267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.241821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.241836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.254611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.254626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.267805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.267820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.280475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.280490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.293538] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.293553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.306316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.306331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.318566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.318581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.331120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.331135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.343875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.343890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.356945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.356959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.369640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.369655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.382903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.382918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.395613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.395628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.408572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.507 [2024-07-24 19:50:08.408587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.507 [2024-07-24 19:50:08.421240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.508 [2024-07-24 19:50:08.421255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.508 [2024-07-24 19:50:08.434559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.508 [2024-07-24 19:50:08.434574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.508 [2024-07-24 19:50:08.447150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.508 [2024-07-24 19:50:08.447165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.508 [2024-07-24 19:50:08.460229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.508 [2024-07-24 19:50:08.460244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.472807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.472823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.485564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.485579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.498646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 
[2024-07-24 19:50:08.498661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.511682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.511701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.524225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.524240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.537035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.537050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.550489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.550504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.563493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.563508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.577060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.577075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.589801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.589816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.602705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.602721] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.615438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.615453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.627896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.627911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.641222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.641237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.653708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.653722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.666812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.666827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.680274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.680289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 00:10:20.769 Latency(us) 00:10:20.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.769 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:20.769 Nvme1n1 : 5.00 19390.10 151.49 0.00 0.00 6594.85 2839.89 25012.91 00:10:20.769 =================================================================================================================== 00:10:20.769 Total : 19390.10 151.49 
0.00 0.00 6594.85 2839.89 25012.91 00:10:20.769 [2024-07-24 19:50:08.689906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.689920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.701934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.701945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.769 [2024-07-24 19:50:08.713972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.769 [2024-07-24 19:50:08.713989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.725999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.726009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.738037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.738049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.750053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.750063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.762084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.762093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.774116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.774125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 
[2024-07-24 19:50:08.786146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.786156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.798177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.798186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.810210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.810218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 [2024-07-24 19:50:08.822241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.031 [2024-07-24 19:50:08.822250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3543390) - No such process 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3543390 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.031 19:50:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 delay0 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.031 19:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:21.031 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.031 [2024-07-24 19:50:08.908643] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:29.171 Initializing NVMe Controllers 00:10:29.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:29.171 Initialization complete. Launching workers. 
00:10:29.171 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 13417 00:10:29.171 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13574, failed to submit 107 00:10:29.171 success 13493, unsuccess 81, failed 0 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.171 rmmod nvme_tcp 00:10:29.171 rmmod nvme_fabrics 00:10:29.171 rmmod nvme_keyring 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3541110 ']' 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3541110 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3541110 ']' 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3541110 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3541110 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3541110' 00:10:29.171 killing process with pid 3541110 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3541110 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3541110 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.171 19:50:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.114 19:50:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.114 00:10:30.114 real 
0m33.548s 00:10:30.114 user 0m44.051s 00:10:30.114 sys 0m11.549s 00:10:30.115 19:50:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.115 19:50:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.115 ************************************ 00:10:30.115 END TEST nvmf_zcopy 00:10:30.115 ************************************ 00:10:30.115 19:50:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.115 19:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.115 19:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.115 19:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.115 ************************************ 00:10:30.115 START TEST nvmf_nmic 00:10:30.115 ************************************ 00:10:30.115 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.376 * Looking for test storage... 
00:10:30.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.376 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.377 
19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.377 19:50:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.377 19:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:36.969 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:36.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:36.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:36.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.969 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.970 19:50:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.970 19:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.231 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.231 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.231 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.231 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.231 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.231 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.492 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:37.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:10:37.492 00:10:37.492 --- 10.0.0.2 ping statistics --- 00:10:37.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.492 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:10:37.492 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:10:37.493 00:10:37.493 --- 10.0.0.1 ping statistics --- 00:10:37.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.493 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3549983 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3549983 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3549983 ']' 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.493 19:50:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.493 [2024-07-24 19:50:25.322824] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:10:37.493 [2024-07-24 19:50:25.322889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.493 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.493 [2024-07-24 19:50:25.393009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.754 [2024-07-24 19:50:25.469547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.754 [2024-07-24 19:50:25.469587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.754 [2024-07-24 19:50:25.469594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.754 [2024-07-24 19:50:25.469601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.754 [2024-07-24 19:50:25.469606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.754 [2024-07-24 19:50:25.469742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.754 [2024-07-24 19:50:25.469873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.754 [2024-07-24 19:50:25.470031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.754 [2024-07-24 19:50:25.470032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 [2024-07-24 19:50:26.159178] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.326 Malloc0 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 [2024-07-24 19:50:26.218488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:38.326 test case1: single bdev can't be used in multiple subsystems 
00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.326 [2024-07-24 19:50:26.254439] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:38.326 [2024-07-24 19:50:26.254457] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:38.326 [2024-07-24 19:50:26.254464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.326 request: 00:10:38.326 { 00:10:38.326 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:38.326 "namespace": { 00:10:38.326 
"bdev_name": "Malloc0", 00:10:38.326 "no_auto_visible": false 00:10:38.326 }, 00:10:38.326 "method": "nvmf_subsystem_add_ns", 00:10:38.326 "req_id": 1 00:10:38.326 } 00:10:38.326 Got JSON-RPC error response 00:10:38.326 response: 00:10:38.326 { 00:10:38.326 "code": -32602, 00:10:38.326 "message": "Invalid parameters" 00:10:38.326 } 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:38.326 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:38.326 Adding namespace failed - expected result. 00:10:38.327 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:38.327 test case2: host connect to nvmf target in multiple paths 00:10:38.327 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:38.327 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.327 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.327 [2024-07-24 19:50:26.266590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:38.327 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.327 19:50:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.289 19:50:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:41.675 19:50:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.675 19:50:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.675 19:50:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.675 19:50:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:41.675 19:50:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.589 19:50:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.589 19:50:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.589 19:50:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.589 19:50:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:43.589 19:50:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.589 19:50:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:43.589 19:50:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.589 [global] 00:10:43.589 thread=1 00:10:43.589 invalidate=1 00:10:43.589 rw=write 00:10:43.589 time_based=1 00:10:43.589 runtime=1 00:10:43.589 ioengine=libaio 00:10:43.589 direct=1 00:10:43.589 bs=4096 00:10:43.589 iodepth=1 00:10:43.589 
norandommap=0 00:10:43.589 numjobs=1 00:10:43.589 00:10:43.589 verify_dump=1 00:10:43.589 verify_backlog=512 00:10:43.589 verify_state_save=0 00:10:43.589 do_verify=1 00:10:43.589 verify=crc32c-intel 00:10:43.589 [job0] 00:10:43.589 filename=/dev/nvme0n1 00:10:43.589 Could not set queue depth (nvme0n1) 00:10:43.850 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.850 fio-3.35 00:10:43.850 Starting 1 thread 00:10:45.235 00:10:45.235 job0: (groupid=0, jobs=1): err= 0: pid=3551424: Wed Jul 24 19:50:32 2024 00:10:45.235 read: IOPS=364, BW=1459KiB/s (1494kB/s)(1460KiB/1001msec) 00:10:45.235 slat (nsec): min=24419, max=58001, avg=25435.46, stdev=3690.03 00:10:45.235 clat (usec): min=1115, max=1548, avg=1367.14, stdev=60.16 00:10:45.235 lat (usec): min=1140, max=1572, avg=1392.58, stdev=60.25 00:10:45.235 clat percentiles (usec): 00:10:45.235 | 1.00th=[ 1221], 5.00th=[ 1254], 10.00th=[ 1287], 20.00th=[ 1319], 00:10:45.235 | 30.00th=[ 1352], 40.00th=[ 1352], 50.00th=[ 1369], 60.00th=[ 1385], 00:10:45.235 | 70.00th=[ 1401], 80.00th=[ 1418], 90.00th=[ 1434], 95.00th=[ 1450], 00:10:45.235 | 99.00th=[ 1500], 99.50th=[ 1516], 99.90th=[ 1549], 99.95th=[ 1549], 00:10:45.235 | 99.99th=[ 1549] 00:10:45.235 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:45.235 slat (usec): min=10, max=24810, avg=81.14, stdev=1095.07 00:10:45.235 clat (usec): min=587, max=1034, avg=866.48, stdev=77.37 00:10:45.235 lat (usec): min=620, max=25664, avg=947.62, stdev=1097.32 00:10:45.235 clat percentiles (usec): 00:10:45.235 | 1.00th=[ 644], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 799], 00:10:45.235 | 30.00th=[ 832], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:10:45.235 | 70.00th=[ 914], 80.00th=[ 930], 90.00th=[ 955], 95.00th=[ 971], 00:10:45.235 | 99.00th=[ 1004], 99.50th=[ 1012], 99.90th=[ 1037], 99.95th=[ 1037], 00:10:45.235 | 99.99th=[ 1037] 00:10:45.235 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.235 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.235 lat (usec) : 750=4.56%, 1000=53.25% 00:10:45.235 lat (msec) : 2=42.19% 00:10:45.235 cpu : usr=1.40%, sys=2.60%, ctx=880, majf=0, minf=1 00:10:45.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.235 issued rwts: total=365,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.235 00:10:45.235 Run status group 0 (all jobs): 00:10:45.235 READ: bw=1459KiB/s (1494kB/s), 1459KiB/s-1459KiB/s (1494kB/s-1494kB/s), io=1460KiB (1495kB), run=1001-1001msec 00:10:45.235 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:45.235 00:10:45.235 Disk stats (read/write): 00:10:45.235 nvme0n1: ios=318/512, merge=0/0, ticks=1388/390, in_queue=1778, util=98.80% 00:10:45.235 19:50:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.235 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:45.235 rmmod nvme_tcp 00:10:45.235 rmmod nvme_fabrics 00:10:45.235 rmmod nvme_keyring 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3549983 ']' 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3549983 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3549983 ']' 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3549983 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:10:45.236 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3549983 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3549983' 00:10:45.497 killing process with pid 3549983 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3549983 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3549983 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.497 19:50:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.044 00:10:48.044 real 0m17.371s 00:10:48.044 user 0m49.076s 00:10:48.044 sys 0m6.036s 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:48.044 ************************************ 00:10:48.044 END TEST nvmf_nmic 00:10:48.044 ************************************ 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.044 ************************************ 00:10:48.044 START TEST nvmf_fio_target 00:10:48.044 ************************************ 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:48.044 * Looking for test storage... 
00:10:48.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.044 19:50:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:48.044 19:50:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.044 19:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:56.190 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:56.190 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.190 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:56.191 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:56.191 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:56.191 19:50:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:56.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:10:56.191 00:10:56.191 --- 10.0.0.2 ping statistics --- 00:10:56.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.191 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:10:56.191 00:10:56.191 --- 10.0.0.1 ping statistics --- 00:10:56.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.191 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3556084 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3556084 00:10:56.191 19:50:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3556084 ']' 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:56.191 19:50:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.191 [2024-07-24 19:50:43.021056] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:10:56.191 [2024-07-24 19:50:43.021121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.191 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.191 [2024-07-24 19:50:43.092274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.191 [2024-07-24 19:50:43.166924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.191 [2024-07-24 19:50:43.166964] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:56.191 [2024-07-24 19:50:43.166972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.191 [2024-07-24 19:50:43.166978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.191 [2024-07-24 19:50:43.166984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.191 [2024-07-24 19:50:43.167126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.191 [2024-07-24 19:50:43.167255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.191 [2024-07-24 19:50:43.167352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.191 [2024-07-24 19:50:43.167353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.191 19:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.191 19:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:56.191 19:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.191 19:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.191 19:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.191 19:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.191 19:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:56.191 [2024-07-24 19:50:43.984565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.191 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.453 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:56.453 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.453 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:56.453 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.714 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:56.714 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.975 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:56.975 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:56.975 19:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.236 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:57.236 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.497 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:57.497 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.497 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:57.497 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:57.758 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.018 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:58.018 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.018 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:58.018 19:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.279 19:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.539 [2024-07-24 19:50:46.250345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.539 19:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:58.539 19:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:58.800 19:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.714 19:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:00.714 19:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:00.714 19:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.714 19:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:00.714 19:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:00.714 19:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:02.659 19:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:02.659 19:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:02.659 19:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.659 19:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:02.659 19:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.659 19:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:02.659 19:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:02.659 [global] 00:11:02.659 thread=1 00:11:02.659 invalidate=1 00:11:02.659 rw=write 00:11:02.659 time_based=1 00:11:02.659 runtime=1 00:11:02.659 ioengine=libaio 00:11:02.659 direct=1 00:11:02.659 bs=4096 00:11:02.659 iodepth=1 00:11:02.659 norandommap=0 00:11:02.659 numjobs=1 00:11:02.659 00:11:02.659 verify_dump=1 00:11:02.659 verify_backlog=512 00:11:02.659 verify_state_save=0 00:11:02.659 do_verify=1 00:11:02.659 verify=crc32c-intel 00:11:02.659 [job0] 00:11:02.659 filename=/dev/nvme0n1 00:11:02.659 [job1] 00:11:02.659 filename=/dev/nvme0n2 00:11:02.659 [job2] 00:11:02.659 filename=/dev/nvme0n3 00:11:02.659 [job3] 00:11:02.659 filename=/dev/nvme0n4 00:11:02.659 Could not set queue depth (nvme0n1) 00:11:02.659 Could not set queue depth (nvme0n2) 00:11:02.659 Could not set queue depth (nvme0n3) 00:11:02.659 Could not set queue depth (nvme0n4) 00:11:02.919 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.919 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.919 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.919 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.919 fio-3.35 00:11:02.919 Starting 4 threads 00:11:04.333 00:11:04.333 job0: (groupid=0, jobs=1): err= 0: pid=3557692: Wed Jul 24 19:50:51 2024 00:11:04.333 read: IOPS=14, BW=57.9KiB/s (59.3kB/s)(60.0KiB/1036msec) 00:11:04.333 slat (nsec): min=24466, max=25396, avg=24855.73, stdev=221.45 00:11:04.333 clat (usec): min=1055, max=42095, avg=36466.79, stdev=14310.44 00:11:04.333 lat (usec): min=1080, max=42120, avg=36491.65, stdev=14310.38 00:11:04.333 clat percentiles (usec): 00:11:04.333 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[ 1385], 
20.00th=[41681], 00:11:04.333 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:04.333 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:04.333 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.333 | 99.99th=[42206] 00:11:04.333 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:04.333 slat (nsec): min=9671, max=53324, avg=32989.86, stdev=5228.76 00:11:04.333 clat (usec): min=552, max=3036, avg=912.97, stdev=135.68 00:11:04.333 lat (usec): min=586, max=3070, avg=945.96, stdev=136.26 00:11:04.333 clat percentiles (usec): 00:11:04.333 | 1.00th=[ 660], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 840], 00:11:04.333 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[ 914], 60.00th=[ 938], 00:11:04.333 | 70.00th=[ 955], 80.00th=[ 971], 90.00th=[ 996], 95.00th=[ 1045], 00:11:04.333 | 99.00th=[ 1237], 99.50th=[ 1385], 99.90th=[ 3032], 99.95th=[ 3032], 00:11:04.333 | 99.99th=[ 3032] 00:11:04.333 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:04.333 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:04.333 lat (usec) : 750=4.93%, 1000=83.87% 00:11:04.333 lat (msec) : 2=8.54%, 4=0.19%, 50=2.47% 00:11:04.333 cpu : usr=1.45%, sys=0.97%, ctx=528, majf=0, minf=1 00:11:04.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.333 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.333 job1: (groupid=0, jobs=1): err= 0: pid=3557693: Wed Jul 24 19:50:51 2024 00:11:04.333 read: IOPS=13, BW=54.9KiB/s (56.2kB/s)(56.0KiB/1020msec) 00:11:04.333 slat (nsec): min=24472, max=25002, avg=24697.86, stdev=148.46 00:11:04.333 clat (usec): min=1360, 
max=42090, avg=39075.81, stdev=10855.18 00:11:04.333 lat (usec): min=1385, max=42115, avg=39100.51, stdev=10855.14 00:11:04.333 clat percentiles (usec): 00:11:04.333 | 1.00th=[ 1369], 5.00th=[ 1369], 10.00th=[41681], 20.00th=[41681], 00:11:04.333 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:04.333 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:04.333 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.333 | 99.99th=[42206] 00:11:04.333 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:11:04.333 slat (nsec): min=9880, max=52611, avg=31872.79, stdev=6623.54 00:11:04.333 clat (usec): min=543, max=1139, avg=883.30, stdev=100.40 00:11:04.333 lat (usec): min=576, max=1172, avg=915.17, stdev=102.43 00:11:04.333 clat percentiles (usec): 00:11:04.333 | 1.00th=[ 603], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 799], 00:11:04.333 | 30.00th=[ 840], 40.00th=[ 881], 50.00th=[ 906], 60.00th=[ 922], 00:11:04.333 | 70.00th=[ 947], 80.00th=[ 963], 90.00th=[ 988], 95.00th=[ 1020], 00:11:04.333 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1139], 99.95th=[ 1139], 00:11:04.333 | 99.99th=[ 1139] 00:11:04.333 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:04.333 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:04.333 lat (usec) : 750=11.03%, 1000=77.95% 00:11:04.333 lat (msec) : 2=8.56%, 50=2.47% 00:11:04.333 cpu : usr=0.79%, sys=1.57%, ctx=527, majf=0, minf=1 00:11:04.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.333 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.333 job2: (groupid=0, jobs=1): err= 0: pid=3557694: 
Wed Jul 24 19:50:51 2024 00:11:04.333 read: IOPS=12, BW=50.6KiB/s (51.8kB/s)(52.0KiB/1027msec) 00:11:04.333 slat (nsec): min=26140, max=41008, avg=27477.31, stdev=4069.29 00:11:04.333 clat (usec): min=41629, max=42320, avg=41987.05, stdev=177.91 00:11:04.333 lat (usec): min=41670, max=42347, avg=42014.53, stdev=175.53 00:11:04.333 clat percentiles (usec): 00:11:04.333 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:04.333 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:04.333 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:04.333 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.333 | 99.99th=[42206] 00:11:04.333 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:11:04.333 slat (nsec): min=9562, max=53395, avg=34354.75, stdev=5291.43 00:11:04.333 clat (usec): min=554, max=1260, avg=897.20, stdev=97.40 00:11:04.333 lat (usec): min=589, max=1295, avg=931.56, stdev=98.72 00:11:04.333 clat percentiles (usec): 00:11:04.333 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 816], 00:11:04.333 | 30.00th=[ 865], 40.00th=[ 898], 50.00th=[ 914], 60.00th=[ 930], 00:11:04.333 | 70.00th=[ 955], 80.00th=[ 971], 90.00th=[ 1004], 95.00th=[ 1037], 00:11:04.333 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1254], 99.95th=[ 1254], 00:11:04.333 | 99.99th=[ 1254] 00:11:04.333 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:04.333 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:04.333 lat (usec) : 750=8.00%, 1000=79.62% 00:11:04.333 lat (msec) : 2=9.90%, 50=2.48% 00:11:04.333 cpu : usr=0.58%, sys=2.63%, ctx=527, majf=0, minf=1 00:11:04.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:04.333 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.333 job3: (groupid=0, jobs=1): err= 0: pid=3557697: Wed Jul 24 19:50:51 2024 00:11:04.333 read: IOPS=13, BW=55.1KiB/s (56.4kB/s)(56.0KiB/1016msec) 00:11:04.333 slat (nsec): min=26678, max=27163, avg=26970.29, stdev=132.02 00:11:04.333 clat (usec): min=1498, max=42012, avg=39063.24, stdev=10811.88 00:11:04.333 lat (usec): min=1525, max=42039, avg=39090.21, stdev=10811.91 00:11:04.333 clat percentiles (usec): 00:11:04.333 | 1.00th=[ 1500], 5.00th=[ 1500], 10.00th=[41681], 20.00th=[41681], 00:11:04.333 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:04.333 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:04.333 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:04.333 | 99.99th=[42206] 00:11:04.333 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:11:04.333 slat (nsec): min=9748, max=68733, avg=34675.81, stdev=7261.56 00:11:04.333 clat (usec): min=458, max=1132, avg=873.39, stdev=117.50 00:11:04.334 lat (usec): min=471, max=1167, avg=908.06, stdev=119.88 00:11:04.334 clat percentiles (usec): 00:11:04.334 | 1.00th=[ 506], 5.00th=[ 652], 10.00th=[ 717], 20.00th=[ 783], 00:11:04.334 | 30.00th=[ 816], 40.00th=[ 865], 50.00th=[ 898], 60.00th=[ 922], 00:11:04.334 | 70.00th=[ 947], 80.00th=[ 971], 90.00th=[ 996], 95.00th=[ 1037], 00:11:04.334 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1139], 99.95th=[ 1139], 00:11:04.334 | 99.99th=[ 1139] 00:11:04.334 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:04.334 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:04.334 lat (usec) : 500=0.76%, 750=12.17%, 1000=74.90% 00:11:04.334 lat (msec) : 2=9.70%, 50=2.47% 00:11:04.334 cpu : usr=0.59%, sys=2.66%, ctx=527, majf=0, minf=1 00:11:04.334 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.334 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.334 00:11:04.334 Run status group 0 (all jobs): 00:11:04.334 READ: bw=216KiB/s (221kB/s), 50.6KiB/s-57.9KiB/s (51.8kB/s-59.3kB/s), io=224KiB (229kB), run=1016-1036msec 00:11:04.334 WRITE: bw=7907KiB/s (8097kB/s), 1977KiB/s-2016KiB/s (2024kB/s-2064kB/s), io=8192KiB (8389kB), run=1016-1036msec 00:11:04.334 00:11:04.334 Disk stats (read/write): 00:11:04.334 nvme0n1: ios=58/512, merge=0/0, ticks=999/417, in_queue=1416, util=84.07% 00:11:04.334 nvme0n2: ios=36/512, merge=0/0, ticks=1224/424, in_queue=1648, util=88.06% 00:11:04.334 nvme0n3: ios=65/512, merge=0/0, ticks=1377/328, in_queue=1705, util=92.49% 00:11:04.334 nvme0n4: ios=66/512, merge=0/0, ticks=1184/293, in_queue=1477, util=94.44% 00:11:04.334 19:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:04.334 [global] 00:11:04.334 thread=1 00:11:04.334 invalidate=1 00:11:04.334 rw=randwrite 00:11:04.334 time_based=1 00:11:04.334 runtime=1 00:11:04.334 ioengine=libaio 00:11:04.334 direct=1 00:11:04.334 bs=4096 00:11:04.334 iodepth=1 00:11:04.334 norandommap=0 00:11:04.334 numjobs=1 00:11:04.334 00:11:04.334 verify_dump=1 00:11:04.334 verify_backlog=512 00:11:04.334 verify_state_save=0 00:11:04.334 do_verify=1 00:11:04.334 verify=crc32c-intel 00:11:04.334 [job0] 00:11:04.334 filename=/dev/nvme0n1 00:11:04.334 [job1] 00:11:04.334 filename=/dev/nvme0n2 00:11:04.334 [job2] 00:11:04.334 filename=/dev/nvme0n3 00:11:04.334 [job3] 00:11:04.334 filename=/dev/nvme0n4 00:11:04.334 Could not set queue depth (nvme0n1) 
00:11:04.334 Could not set queue depth (nvme0n2) 00:11:04.334 Could not set queue depth (nvme0n3) 00:11:04.334 Could not set queue depth (nvme0n4) 00:11:04.595 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.595 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.595 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.595 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.595 fio-3.35 00:11:04.595 Starting 4 threads 00:11:06.006 00:11:06.006 job0: (groupid=0, jobs=1): err= 0: pid=3558214: Wed Jul 24 19:50:53 2024 00:11:06.006 read: IOPS=422, BW=1690KiB/s (1731kB/s)(1692KiB/1001msec) 00:11:06.006 slat (nsec): min=7478, max=47732, avg=25463.20, stdev=3234.14 00:11:06.006 clat (usec): min=899, max=1538, avg=1208.48, stdev=72.38 00:11:06.006 lat (usec): min=924, max=1562, avg=1233.94, stdev=72.75 00:11:06.006 clat percentiles (usec): 00:11:06.006 | 1.00th=[ 1004], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1156], 00:11:06.006 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1221], 60.00th=[ 1237], 00:11:06.006 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1287], 95.00th=[ 1319], 00:11:06.006 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 1532], 99.95th=[ 1532], 00:11:06.006 | 99.99th=[ 1532] 00:11:06.006 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:06.006 slat (usec): min=10, max=1610, avg=34.90, stdev=69.98 00:11:06.006 clat (usec): min=515, max=1309, avg=883.34, stdev=100.28 00:11:06.006 lat (usec): min=548, max=2433, avg=918.24, stdev=121.40 00:11:06.006 clat percentiles (usec): 00:11:06.006 | 1.00th=[ 611], 5.00th=[ 709], 10.00th=[ 758], 20.00th=[ 799], 00:11:06.006 | 30.00th=[ 832], 40.00th=[ 873], 50.00th=[ 906], 60.00th=[ 922], 00:11:06.006 | 70.00th=[ 938], 80.00th=[ 
963], 90.00th=[ 996], 95.00th=[ 1020], 00:11:06.006 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[ 1303], 99.95th=[ 1303], 00:11:06.006 | 99.99th=[ 1303] 00:11:06.006 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.006 lat (usec) : 750=5.03%, 1000=45.24% 00:11:06.006 lat (msec) : 2=49.73% 00:11:06.006 cpu : usr=1.40%, sys=2.90%, ctx=938, majf=0, minf=1 00:11:06.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.006 issued rwts: total=423,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.006 job1: (groupid=0, jobs=1): err= 0: pid=3558215: Wed Jul 24 19:50:53 2024 00:11:06.006 read: IOPS=378, BW=1513KiB/s (1549kB/s)(1516KiB/1002msec) 00:11:06.006 slat (nsec): min=12289, max=43203, avg=26218.91, stdev=3020.33 00:11:06.006 clat (usec): min=804, max=1921, avg=1328.63, stdev=114.57 00:11:06.006 lat (usec): min=830, max=1947, avg=1354.85, stdev=114.79 00:11:06.006 clat percentiles (usec): 00:11:06.006 | 1.00th=[ 1020], 5.00th=[ 1139], 10.00th=[ 1205], 20.00th=[ 1254], 00:11:06.006 | 30.00th=[ 1287], 40.00th=[ 1319], 50.00th=[ 1336], 60.00th=[ 1352], 00:11:06.006 | 70.00th=[ 1369], 80.00th=[ 1401], 90.00th=[ 1450], 95.00th=[ 1516], 00:11:06.006 | 99.00th=[ 1680], 99.50th=[ 1926], 99.90th=[ 1926], 99.95th=[ 1926], 00:11:06.006 | 99.99th=[ 1926] 00:11:06.006 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:06.006 slat (nsec): min=8891, max=66835, avg=31855.07, stdev=4447.79 00:11:06.006 clat (usec): min=568, max=1317, avg=905.38, stdev=108.93 00:11:06.006 lat (usec): min=601, max=1349, avg=937.24, stdev=109.32 00:11:06.006 clat percentiles (usec): 
00:11:06.006 | 1.00th=[ 635], 5.00th=[ 725], 10.00th=[ 775], 20.00th=[ 816], 00:11:06.006 | 30.00th=[ 857], 40.00th=[ 889], 50.00th=[ 914], 60.00th=[ 938], 00:11:06.006 | 70.00th=[ 955], 80.00th=[ 979], 90.00th=[ 1020], 95.00th=[ 1074], 00:11:06.006 | 99.00th=[ 1205], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1319], 00:11:06.006 | 99.99th=[ 1319] 00:11:06.006 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.006 lat (usec) : 750=4.26%, 1000=45.01% 00:11:06.006 lat (msec) : 2=50.73% 00:11:06.006 cpu : usr=2.00%, sys=3.60%, ctx=891, majf=0, minf=1 00:11:06.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.006 issued rwts: total=379,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.006 job2: (groupid=0, jobs=1): err= 0: pid=3558216: Wed Jul 24 19:50:53 2024 00:11:06.006 read: IOPS=14, BW=58.4KiB/s (59.8kB/s)(60.0KiB/1028msec) 00:11:06.006 slat (nsec): min=26439, max=27166, avg=26715.27, stdev=186.82 00:11:06.006 clat (usec): min=41789, max=42038, avg=41952.98, stdev=58.69 00:11:06.006 lat (usec): min=41816, max=42065, avg=41979.70, stdev=58.67 00:11:06.006 clat percentiles (usec): 00:11:06.006 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:06.006 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:11:06.006 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:06.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:06.006 | 99.99th=[42206] 00:11:06.006 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:11:06.007 slat (nsec): min=9833, max=53532, 
avg=30004.21, stdev=8698.35 00:11:06.007 clat (usec): min=378, max=1066, avg=739.06, stdev=146.94 00:11:06.007 lat (usec): min=410, max=1100, avg=769.07, stdev=149.84 00:11:06.007 clat percentiles (usec): 00:11:06.007 | 1.00th=[ 465], 5.00th=[ 545], 10.00th=[ 570], 20.00th=[ 611], 00:11:06.007 | 30.00th=[ 660], 40.00th=[ 685], 50.00th=[ 701], 60.00th=[ 717], 00:11:06.007 | 70.00th=[ 791], 80.00th=[ 914], 90.00th=[ 971], 95.00th=[ 996], 00:11:06.007 | 99.00th=[ 1045], 99.50th=[ 1057], 99.90th=[ 1074], 99.95th=[ 1074], 00:11:06.007 | 99.99th=[ 1074] 00:11:06.007 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.007 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.007 lat (usec) : 500=1.52%, 750=63.19%, 1000=27.70% 00:11:06.007 lat (msec) : 2=4.74%, 50=2.85% 00:11:06.007 cpu : usr=0.49%, sys=1.75%, ctx=528, majf=0, minf=1 00:11:06.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.007 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.007 job3: (groupid=0, jobs=1): err= 0: pid=3558217: Wed Jul 24 19:50:53 2024 00:11:06.007 read: IOPS=13, BW=55.0KiB/s (56.3kB/s)(56.0KiB/1018msec) 00:11:06.007 slat (nsec): min=25403, max=26240, avg=25900.50, stdev=240.56 00:11:06.007 clat (usec): min=41093, max=42079, avg=41793.70, stdev=319.07 00:11:06.007 lat (usec): min=41119, max=42105, avg=41819.60, stdev=319.07 00:11:06.007 clat percentiles (usec): 00:11:06.007 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:06.007 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:11:06.007 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:06.007 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:06.007 | 99.99th=[42206] 00:11:06.007 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:06.007 slat (nsec): min=9532, max=52759, avg=28372.28, stdev=9028.55 00:11:06.007 clat (usec): min=360, max=1077, avg=807.95, stdev=146.29 00:11:06.007 lat (usec): min=370, max=1110, avg=836.33, stdev=149.33 00:11:06.007 clat percentiles (usec): 00:11:06.007 | 1.00th=[ 474], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 668], 00:11:06.007 | 30.00th=[ 725], 40.00th=[ 783], 50.00th=[ 832], 60.00th=[ 873], 00:11:06.007 | 70.00th=[ 914], 80.00th=[ 947], 90.00th=[ 988], 95.00th=[ 1004], 00:11:06.007 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1074], 99.95th=[ 1074], 00:11:06.007 | 99.99th=[ 1074] 00:11:06.007 bw ( KiB/s): min= 4087, max= 4087, per=51.29%, avg=4087.00, stdev= 0.00, samples=1 00:11:06.007 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:06.007 lat (usec) : 500=2.28%, 750=30.99%, 1000=57.79% 00:11:06.007 lat (msec) : 2=6.27%, 50=2.66% 00:11:06.007 cpu : usr=0.88%, sys=1.28%, ctx=529, majf=0, minf=1 00:11:06.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.007 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.007 00:11:06.007 Run status group 0 (all jobs): 00:11:06.007 READ: bw=3233KiB/s (3311kB/s), 55.0KiB/s-1690KiB/s (56.3kB/s-1731kB/s), io=3324KiB (3404kB), run=1001-1028msec 00:11:06.007 WRITE: bw=7969KiB/s (8160kB/s), 1992KiB/s-2046KiB/s (2040kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1028msec 00:11:06.007 00:11:06.007 Disk stats (read/write): 00:11:06.007 nvme0n1: ios=328/512, merge=0/0, ticks=954/427, in_queue=1381, util=100.00% 00:11:06.007 nvme0n2: ios=307/512, 
merge=0/0, ticks=340/370, in_queue=710, util=87.77% 00:11:06.007 nvme0n3: ios=38/512, merge=0/0, ticks=712/375, in_queue=1087, util=95.79% 00:11:06.007 nvme0n4: ios=66/512, merge=0/0, ticks=1510/405, in_queue=1915, util=97.01% 00:11:06.007 19:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:06.007 [global] 00:11:06.007 thread=1 00:11:06.007 invalidate=1 00:11:06.007 rw=write 00:11:06.007 time_based=1 00:11:06.007 runtime=1 00:11:06.007 ioengine=libaio 00:11:06.007 direct=1 00:11:06.007 bs=4096 00:11:06.007 iodepth=128 00:11:06.007 norandommap=0 00:11:06.007 numjobs=1 00:11:06.007 00:11:06.007 verify_dump=1 00:11:06.007 verify_backlog=512 00:11:06.007 verify_state_save=0 00:11:06.007 do_verify=1 00:11:06.007 verify=crc32c-intel 00:11:06.007 [job0] 00:11:06.007 filename=/dev/nvme0n1 00:11:06.007 [job1] 00:11:06.007 filename=/dev/nvme0n2 00:11:06.007 [job2] 00:11:06.007 filename=/dev/nvme0n3 00:11:06.007 [job3] 00:11:06.007 filename=/dev/nvme0n4 00:11:06.007 Could not set queue depth (nvme0n1) 00:11:06.007 Could not set queue depth (nvme0n2) 00:11:06.007 Could not set queue depth (nvme0n3) 00:11:06.007 Could not set queue depth (nvme0n4) 00:11:06.267 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.267 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.267 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.267 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.267 fio-3.35 00:11:06.267 Starting 4 threads 00:11:07.279 00:11:07.279 job0: (groupid=0, jobs=1): err= 0: pid=3558741: Wed Jul 24 19:50:55 2024 00:11:07.279 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 
00:11:07.279 slat (nsec): min=896, max=21435k, avg=105500.03, stdev=859373.86 00:11:07.279 clat (usec): min=2378, max=40340, avg=14593.04, stdev=6045.86 00:11:07.279 lat (usec): min=2386, max=40349, avg=14698.54, stdev=6085.44 00:11:07.279 clat percentiles (usec): 00:11:07.279 | 1.00th=[ 3621], 5.00th=[ 7308], 10.00th=[ 8848], 20.00th=[ 9503], 00:11:07.279 | 30.00th=[10421], 40.00th=[11469], 50.00th=[13042], 60.00th=[15008], 00:11:07.279 | 70.00th=[17171], 80.00th=[19006], 90.00th=[23200], 95.00th=[25822], 00:11:07.279 | 99.00th=[31327], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:11:07.279 | 99.99th=[40109] 00:11:07.279 write: IOPS=4929, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1009msec); 0 zone resets 00:11:07.280 slat (nsec): min=1539, max=8819.2k, avg=88160.95, stdev=510266.73 00:11:07.280 clat (usec): min=1359, max=40891, avg=12230.97, stdev=5962.48 00:11:07.280 lat (usec): min=1369, max=40902, avg=12319.13, stdev=5989.56 00:11:07.280 clat percentiles (usec): 00:11:07.280 | 1.00th=[ 3032], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 8029], 00:11:07.280 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11731], 00:11:07.280 | 70.00th=[13173], 80.00th=[14746], 90.00th=[18744], 95.00th=[26084], 00:11:07.280 | 99.00th=[35390], 99.50th=[39060], 99.90th=[40633], 99.95th=[40633], 00:11:07.280 | 99.99th=[40633] 00:11:07.280 bw ( KiB/s): min=18296, max=20480, per=19.32%, avg=19388.00, stdev=1544.32, samples=2 00:11:07.280 iops : min= 4574, max= 5120, avg=4847.00, stdev=386.08, samples=2 00:11:07.280 lat (msec) : 2=0.07%, 4=1.24%, 10=32.34%, 20=52.57%, 50=13.78% 00:11:07.280 cpu : usr=3.87%, sys=4.66%, ctx=397, majf=0, minf=1 00:11:07.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:07.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.280 issued rwts: total=4608,4974,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:11:07.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.280 job1: (groupid=0, jobs=1): err= 0: pid=3558744: Wed Jul 24 19:50:55 2024 00:11:07.280 read: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec) 00:11:07.280 slat (nsec): min=933, max=10487k, avg=61699.65, stdev=423463.87 00:11:07.280 clat (usec): min=1284, max=27406, avg=8087.19, stdev=2456.30 00:11:07.280 lat (usec): min=1293, max=27414, avg=8148.89, stdev=2479.57 00:11:07.280 clat percentiles (usec): 00:11:07.280 | 1.00th=[ 3851], 5.00th=[ 5276], 10.00th=[ 5669], 20.00th=[ 6521], 00:11:07.280 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 8094], 00:11:07.280 | 70.00th=[ 8717], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11469], 00:11:07.280 | 99.00th=[17171], 99.50th=[20317], 99.90th=[27132], 99.95th=[27395], 00:11:07.280 | 99.99th=[27395] 00:11:07.280 write: IOPS=7992, BW=31.2MiB/s (32.7MB/s)(31.4MiB/1007msec); 0 zone resets 00:11:07.280 slat (nsec): min=1557, max=6177.1k, avg=60097.75, stdev=338023.35 00:11:07.280 clat (usec): min=1186, max=27641, avg=8151.37, stdev=4183.89 00:11:07.280 lat (usec): min=1197, max=27643, avg=8211.47, stdev=4202.01 00:11:07.280 clat percentiles (usec): 00:11:07.280 | 1.00th=[ 2737], 5.00th=[ 3982], 10.00th=[ 4621], 20.00th=[ 5669], 00:11:07.280 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 7177], 00:11:07.280 | 70.00th=[ 8160], 80.00th=[10159], 90.00th=[13829], 95.00th=[16581], 00:11:07.280 | 99.00th=[25297], 99.50th=[26346], 99.90th=[27395], 99.95th=[27657], 00:11:07.280 | 99.99th=[27657] 00:11:07.280 bw ( KiB/s): min=29216, max=34144, per=31.57%, avg=31680.00, stdev=3484.62, samples=2 00:11:07.280 iops : min= 7304, max= 8536, avg=7920.00, stdev=871.16, samples=2 00:11:07.280 lat (msec) : 2=0.19%, 4=2.97%, 10=78.05%, 20=16.93%, 50=1.86% 00:11:07.280 cpu : usr=4.08%, sys=6.66%, ctx=719, majf=0, minf=1 00:11:07.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:07.280 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.280 issued rwts: total=7680,8048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.280 job2: (groupid=0, jobs=1): err= 0: pid=3558745: Wed Jul 24 19:50:55 2024 00:11:07.280 read: IOPS=5139, BW=20.1MiB/s (21.1MB/s)(20.3MiB/1009msec) 00:11:07.280 slat (nsec): min=931, max=15557k, avg=85959.63, stdev=621378.13 00:11:07.280 clat (usec): min=1798, max=70781, avg=11192.88, stdev=6731.05 00:11:07.280 lat (usec): min=1804, max=70788, avg=11278.84, stdev=6793.90 00:11:07.280 clat percentiles (usec): 00:11:07.280 | 1.00th=[ 4424], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7701], 00:11:07.280 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[10814], 00:11:07.280 | 70.00th=[11863], 80.00th=[12780], 90.00th=[15139], 95.00th=[17171], 00:11:07.280 | 99.00th=[50070], 99.50th=[64750], 99.90th=[68682], 99.95th=[70779], 00:11:07.280 | 99.99th=[70779] 00:11:07.280 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:11:07.280 slat (nsec): min=1655, max=13355k, avg=88215.38, stdev=545928.24 00:11:07.280 clat (usec): min=1609, max=74155, avg=12237.35, stdev=12059.14 00:11:07.280 lat (usec): min=1919, max=74174, avg=12325.56, stdev=12135.87 00:11:07.280 clat percentiles (usec): 00:11:07.280 | 1.00th=[ 3851], 5.00th=[ 5145], 10.00th=[ 5932], 20.00th=[ 6980], 00:11:07.280 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[ 9896], 00:11:07.280 | 70.00th=[10814], 80.00th=[12518], 90.00th=[15401], 95.00th=[43779], 00:11:07.280 | 99.00th=[69731], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:11:07.280 | 99.99th=[73925] 00:11:07.280 bw ( KiB/s): min=17096, max=27464, per=22.20%, avg=22280.00, stdev=7331.28, samples=2 00:11:07.280 iops : min= 4274, max= 6866, avg=5570.00, stdev=1832.82, samples=2 00:11:07.280 lat 
(msec) : 2=0.13%, 4=0.79%, 10=54.40%, 20=39.91%, 50=1.85% 00:11:07.280 lat (msec) : 100=2.93% 00:11:07.280 cpu : usr=3.57%, sys=5.26%, ctx=533, majf=0, minf=1 00:11:07.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:07.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.280 issued rwts: total=5186,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.280 job3: (groupid=0, jobs=1): err= 0: pid=3558746: Wed Jul 24 19:50:55 2024 00:11:07.280 read: IOPS=6590, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1010msec) 00:11:07.280 slat (nsec): min=905, max=8493.3k, avg=82466.29, stdev=595018.20 00:11:07.280 clat (usec): min=3985, max=25741, avg=10580.66, stdev=3508.84 00:11:07.280 lat (usec): min=3987, max=27491, avg=10663.13, stdev=3541.79 00:11:07.280 clat percentiles (usec): 00:11:07.280 | 1.00th=[ 6128], 5.00th=[ 6915], 10.00th=[ 7439], 20.00th=[ 7963], 00:11:07.280 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10290], 00:11:07.280 | 70.00th=[11469], 80.00th=[13304], 90.00th=[15270], 95.00th=[17433], 00:11:07.280 | 99.00th=[23462], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:11:07.280 | 99.99th=[25822] 00:11:07.280 write: IOPS=6615, BW=25.8MiB/s (27.1MB/s)(26.1MiB/1010msec); 0 zone resets 00:11:07.280 slat (nsec): min=1581, max=9257.6k, avg=63682.19, stdev=413223.52 00:11:07.280 clat (usec): min=1229, max=20631, avg=8629.94, stdev=2715.15 00:11:07.280 lat (usec): min=1241, max=20639, avg=8693.62, stdev=2718.41 00:11:07.280 clat percentiles (usec): 00:11:07.280 | 1.00th=[ 2999], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 6718], 00:11:07.280 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8455], 00:11:07.280 | 70.00th=[ 9241], 80.00th=[10814], 90.00th=[12387], 95.00th=[14222], 00:11:07.280 | 99.00th=[16188], 99.50th=[16909], 
99.90th=[19006], 99.95th=[19530], 00:11:07.280 | 99.99th=[20579] 00:11:07.280 bw ( KiB/s): min=24208, max=29040, per=26.53%, avg=26624.00, stdev=3416.74, samples=2 00:11:07.280 iops : min= 6052, max= 7260, avg=6656.00, stdev=854.18, samples=2 00:11:07.280 lat (msec) : 2=0.01%, 4=1.66%, 10=65.96%, 20=30.89%, 50=1.47% 00:11:07.280 cpu : usr=4.16%, sys=5.75%, ctx=519, majf=0, minf=1 00:11:07.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:07.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.280 issued rwts: total=6656,6682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.280 00:11:07.280 Run status group 0 (all jobs): 00:11:07.280 READ: bw=93.3MiB/s (97.9MB/s), 17.8MiB/s-29.8MiB/s (18.7MB/s-31.2MB/s), io=94.3MiB (98.8MB), run=1007-1010msec 00:11:07.280 WRITE: bw=98.0MiB/s (103MB/s), 19.3MiB/s-31.2MiB/s (20.2MB/s-32.7MB/s), io=99.0MiB (104MB), run=1007-1010msec 00:11:07.280 00:11:07.280 Disk stats (read/write): 00:11:07.280 nvme0n1: ios=4146/4423, merge=0/0, ticks=50183/44138, in_queue=94321, util=86.27% 00:11:07.280 nvme0n2: ios=6194/6575, merge=0/0, ticks=49497/53774, in_queue=103271, util=92.76% 00:11:07.280 nvme0n3: ios=4140/4608, merge=0/0, ticks=44953/57348, in_queue=102301, util=99.05% 00:11:07.280 nvme0n4: ios=5689/5732, merge=0/0, ticks=55339/47143, in_queue=102482, util=97.33% 00:11:07.280 19:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:07.541 [global] 00:11:07.541 thread=1 00:11:07.541 invalidate=1 00:11:07.541 rw=randwrite 00:11:07.541 time_based=1 00:11:07.541 runtime=1 00:11:07.541 ioengine=libaio 00:11:07.541 direct=1 00:11:07.541 bs=4096 00:11:07.541 iodepth=128 00:11:07.541 norandommap=0 
00:11:07.541 numjobs=1 00:11:07.541 00:11:07.541 verify_dump=1 00:11:07.541 verify_backlog=512 00:11:07.541 verify_state_save=0 00:11:07.541 do_verify=1 00:11:07.541 verify=crc32c-intel 00:11:07.541 [job0] 00:11:07.541 filename=/dev/nvme0n1 00:11:07.541 [job1] 00:11:07.541 filename=/dev/nvme0n2 00:11:07.541 [job2] 00:11:07.541 filename=/dev/nvme0n3 00:11:07.541 [job3] 00:11:07.541 filename=/dev/nvme0n4 00:11:07.541 Could not set queue depth (nvme0n1) 00:11:07.541 Could not set queue depth (nvme0n2) 00:11:07.541 Could not set queue depth (nvme0n3) 00:11:07.541 Could not set queue depth (nvme0n4) 00:11:07.801 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.801 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.801 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.801 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.801 fio-3.35 00:11:07.801 Starting 4 threads 00:11:09.188 00:11:09.188 job0: (groupid=0, jobs=1): err= 0: pid=3559285: Wed Jul 24 19:50:56 2024 00:11:09.188 read: IOPS=5438, BW=21.2MiB/s (22.3MB/s)(21.4MiB/1006msec) 00:11:09.188 slat (nsec): min=877, max=13099k, avg=81195.94, stdev=578697.12 00:11:09.188 clat (usec): min=3280, max=35208, avg=11378.53, stdev=4291.66 00:11:09.188 lat (usec): min=3300, max=35211, avg=11459.73, stdev=4331.84 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 3589], 5.00th=[ 6915], 10.00th=[ 7373], 20.00th=[ 8225], 00:11:09.188 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11207], 00:11:09.188 | 70.00th=[12256], 80.00th=[13566], 90.00th=[16581], 95.00th=[19792], 00:11:09.188 | 99.00th=[28705], 99.50th=[30016], 99.90th=[34866], 99.95th=[35390], 00:11:09.188 | 99.99th=[35390] 00:11:09.188 write: IOPS=5598, BW=21.9MiB/s 
(22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:11:09.188 slat (nsec): min=1493, max=8599.2k, avg=78940.84, stdev=449152.95 00:11:09.188 clat (usec): min=752, max=35208, avg=11623.67, stdev=5841.68 00:11:09.188 lat (usec): min=770, max=35221, avg=11702.61, stdev=5872.70 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 2573], 5.00th=[ 4752], 10.00th=[ 5735], 20.00th=[ 6849], 00:11:09.188 | 30.00th=[ 7963], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[11600], 00:11:09.188 | 70.00th=[13829], 80.00th=[15139], 90.00th=[19792], 95.00th=[22414], 00:11:09.188 | 99.00th=[30802], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:11:09.188 | 99.99th=[35390] 00:11:09.188 bw ( KiB/s): min=21456, max=23600, per=22.00%, avg=22528.00, stdev=1516.04, samples=2 00:11:09.188 iops : min= 5364, max= 5900, avg=5632.00, stdev=379.01, samples=2 00:11:09.188 lat (usec) : 1000=0.09% 00:11:09.188 lat (msec) : 2=0.30%, 4=1.69%, 10=44.04%, 20=46.59%, 50=7.29% 00:11:09.188 cpu : usr=3.78%, sys=4.28%, ctx=582, majf=0, minf=1 00:11:09.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:09.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.188 issued rwts: total=5471,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.188 job1: (groupid=0, jobs=1): err= 0: pid=3559286: Wed Jul 24 19:50:56 2024 00:11:09.188 read: IOPS=8078, BW=31.6MiB/s (33.1MB/s)(32.0MiB/1014msec) 00:11:09.188 slat (nsec): min=943, max=12723k, avg=61446.18, stdev=461608.86 00:11:09.188 clat (usec): min=1627, max=24893, avg=8129.47, stdev=2484.09 00:11:09.188 lat (usec): min=1634, max=24904, avg=8190.92, stdev=2503.44 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 3261], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6390], 00:11:09.188 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7439], 
60.00th=[ 8029], 00:11:09.188 | 70.00th=[ 8717], 80.00th=[ 9896], 90.00th=[11076], 95.00th=[12125], 00:11:09.188 | 99.00th=[16909], 99.50th=[20579], 99.90th=[20841], 99.95th=[21365], 00:11:09.188 | 99.99th=[24773] 00:11:09.188 write: IOPS=8301, BW=32.4MiB/s (34.0MB/s)(32.9MiB/1014msec); 0 zone resets 00:11:09.188 slat (nsec): min=1531, max=9616.7k, avg=52890.44, stdev=352430.42 00:11:09.188 clat (usec): min=1149, max=20840, avg=7318.91, stdev=2863.25 00:11:09.188 lat (usec): min=1158, max=20853, avg=7371.80, stdev=2862.73 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 2671], 5.00th=[ 3752], 10.00th=[ 4490], 20.00th=[ 5669], 00:11:09.188 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7111], 00:11:09.188 | 70.00th=[ 7373], 80.00th=[ 8225], 90.00th=[10159], 95.00th=[14615], 00:11:09.188 | 99.00th=[17695], 99.50th=[20055], 99.90th=[20055], 99.95th=[20841], 00:11:09.188 | 99.99th=[20841] 00:11:09.188 bw ( KiB/s): min=32312, max=34016, per=32.39%, avg=33164.00, stdev=1204.91, samples=2 00:11:09.188 iops : min= 8078, max= 8504, avg=8291.00, stdev=301.23, samples=2 00:11:09.188 lat (msec) : 2=0.07%, 4=3.87%, 10=81.05%, 20=14.55%, 50=0.46% 00:11:09.188 cpu : usr=4.05%, sys=6.42%, ctx=789, majf=0, minf=1 00:11:09.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:09.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.188 issued rwts: total=8192,8418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.188 job2: (groupid=0, jobs=1): err= 0: pid=3559287: Wed Jul 24 19:50:56 2024 00:11:09.188 read: IOPS=4090, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1007msec) 00:11:09.188 slat (nsec): min=896, max=19967k, avg=116334.69, stdev=865633.90 00:11:09.188 clat (usec): min=5785, max=49900, avg=14555.56, stdev=6444.89 00:11:09.188 lat (usec): min=5788, 
max=49924, avg=14671.89, stdev=6526.79 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10159], 00:11:09.188 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11731], 60.00th=[12387], 00:11:09.188 | 70.00th=[14877], 80.00th=[21365], 90.00th=[23200], 95.00th=[27657], 00:11:09.188 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[41157], 00:11:09.188 | 99.99th=[50070] 00:11:09.188 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:11:09.188 slat (nsec): min=1476, max=11981k, avg=109095.64, stdev=522749.40 00:11:09.188 clat (usec): min=5366, max=51142, avg=14646.95, stdev=8213.79 00:11:09.188 lat (usec): min=5368, max=51144, avg=14756.05, stdev=8266.06 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 6980], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9110], 00:11:09.188 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11600], 60.00th=[13173], 00:11:09.188 | 70.00th=[14877], 80.00th=[16909], 90.00th=[26870], 95.00th=[33162], 00:11:09.188 | 99.00th=[45876], 99.50th=[46924], 99.90th=[51119], 99.95th=[51119], 00:11:09.188 | 99.99th=[51119] 00:11:09.188 bw ( KiB/s): min=13856, max=22176, per=17.60%, avg=18016.00, stdev=5883.13, samples=2 00:11:09.188 iops : min= 3464, max= 5544, avg=4504.00, stdev=1470.78, samples=2 00:11:09.188 lat (msec) : 10=25.31%, 20=55.53%, 50=19.09%, 100=0.07% 00:11:09.188 cpu : usr=2.68%, sys=3.48%, ctx=535, majf=0, minf=1 00:11:09.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:09.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.188 issued rwts: total=4119,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.188 job3: (groupid=0, jobs=1): err= 0: pid=3559288: Wed Jul 24 19:50:56 2024 00:11:09.188 read: IOPS=7027, 
BW=27.5MiB/s (28.8MB/s)(28.0MiB/1020msec) 00:11:09.188 slat (nsec): min=942, max=8940.8k, avg=68964.91, stdev=478310.75 00:11:09.188 clat (usec): min=3229, max=18003, avg=9001.68, stdev=2127.31 00:11:09.188 lat (usec): min=3233, max=18006, avg=9070.65, stdev=2146.21 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7308], 00:11:09.188 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 9110], 00:11:09.188 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[12125], 95.00th=[13173], 00:11:09.188 | 99.00th=[15139], 99.50th=[15795], 99.90th=[16712], 99.95th=[17957], 00:11:09.188 | 99.99th=[17957] 00:11:09.188 write: IOPS=7302, BW=28.5MiB/s (29.9MB/s)(29.1MiB/1020msec); 0 zone resets 00:11:09.188 slat (nsec): min=1575, max=8786.2k, avg=63454.39, stdev=361693.09 00:11:09.188 clat (usec): min=1175, max=28196, avg=8651.82, stdev=3593.72 00:11:09.188 lat (usec): min=1216, max=28204, avg=8715.27, stdev=3601.26 00:11:09.188 clat percentiles (usec): 00:11:09.188 | 1.00th=[ 3425], 5.00th=[ 4686], 10.00th=[ 5342], 20.00th=[ 6194], 00:11:09.188 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8225], 00:11:09.188 | 70.00th=[ 8717], 80.00th=[10159], 90.00th=[12911], 95.00th=[15795], 00:11:09.188 | 99.00th=[25560], 99.50th=[27657], 99.90th=[28181], 99.95th=[28181], 00:11:09.188 | 99.99th=[28181] 00:11:09.188 bw ( KiB/s): min=28672, max=29904, per=28.61%, avg=29288.00, stdev=871.16, samples=2 00:11:09.188 iops : min= 7168, max= 7476, avg=7322.00, stdev=217.79, samples=2 00:11:09.188 lat (msec) : 2=0.01%, 4=1.07%, 10=76.38%, 20=21.67%, 50=0.86% 00:11:09.188 cpu : usr=3.73%, sys=6.38%, ctx=805, majf=0, minf=1 00:11:09.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:09.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.188 issued rwts: total=7168,7449,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:09.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.188 00:11:09.188 Run status group 0 (all jobs): 00:11:09.188 READ: bw=95.5MiB/s (100MB/s), 16.0MiB/s-31.6MiB/s (16.8MB/s-33.1MB/s), io=97.5MiB (102MB), run=1006-1020msec 00:11:09.188 WRITE: bw=100.0MiB/s (105MB/s), 17.9MiB/s-32.4MiB/s (18.7MB/s-34.0MB/s), io=102MiB (107MB), run=1006-1020msec 00:11:09.188 00:11:09.188 Disk stats (read/write): 00:11:09.188 nvme0n1: ios=4489/4608, merge=0/0, ticks=51667/52091, in_queue=103758, util=89.18% 00:11:09.188 nvme0n2: ios=6704/7087, merge=0/0, ticks=52881/49430, in_queue=102311, util=90.11% 00:11:09.189 nvme0n3: ios=3640/4055, merge=0/0, ticks=25924/24968, in_queue=50892, util=93.68% 00:11:09.189 nvme0n4: ios=5953/6144, merge=0/0, ticks=52283/49996, in_queue=102279, util=96.05% 00:11:09.189 19:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:09.189 19:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3559618 00:11:09.189 19:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:09.189 19:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:09.189 [global] 00:11:09.189 thread=1 00:11:09.189 invalidate=1 00:11:09.189 rw=read 00:11:09.189 time_based=1 00:11:09.189 runtime=10 00:11:09.189 ioengine=libaio 00:11:09.189 direct=1 00:11:09.189 bs=4096 00:11:09.189 iodepth=1 00:11:09.189 norandommap=1 00:11:09.189 numjobs=1 00:11:09.189 00:11:09.189 [job0] 00:11:09.189 filename=/dev/nvme0n1 00:11:09.189 [job1] 00:11:09.189 filename=/dev/nvme0n2 00:11:09.189 [job2] 00:11:09.189 filename=/dev/nvme0n3 00:11:09.189 [job3] 00:11:09.189 filename=/dev/nvme0n4 00:11:09.189 Could not set queue depth (nvme0n1) 00:11:09.189 Could not set queue depth (nvme0n2) 00:11:09.189 Could not set queue depth (nvme0n3) 
00:11:09.189 Could not set queue depth (nvme0n4) 00:11:09.450 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.450 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.450 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.450 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.450 fio-3.35 00:11:09.450 Starting 4 threads 00:11:12.000 19:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:12.262 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3313664, buflen=4096 00:11:12.262 fio: pid=3559812, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:12.262 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:12.522 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1134592, buflen=4096 00:11:12.522 fio: pid=3559811, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:12.522 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.522 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:12.522 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=286720, buflen=4096 00:11:12.522 fio: pid=3559808, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:12.522 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.522 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:12.784 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.784 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:12.784 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=331776, buflen=4096 00:11:12.784 fio: pid=3559810, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:12.784 00:11:12.784 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3559808: Wed Jul 24 19:51:00 2024 00:11:12.784 read: IOPS=24, BW=95.8KiB/s (98.1kB/s)(280KiB/2923msec) 00:11:12.784 slat (usec): min=23, max=223, avg=30.00, stdev=32.27 00:11:12.784 clat (usec): min=1199, max=42289, avg=41386.31, stdev=4873.21 00:11:12.784 lat (usec): min=1237, max=42314, avg=41416.37, stdev=4872.14 00:11:12.784 clat percentiles (usec): 00:11:12.784 | 1.00th=[ 1205], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:12.784 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:12.784 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:12.784 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:12.784 | 99.99th=[42206] 00:11:12.784 bw ( KiB/s): min= 96, max= 96, per=6.07%, avg=96.00, stdev= 0.00, samples=5 00:11:12.784 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:11:12.784 lat (msec) : 2=1.41%, 50=97.18% 00:11:12.784 cpu : usr=0.00%, sys=0.10%, ctx=73, majf=0, minf=1 00:11:12.784 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.784 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.784 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.784 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3559810: Wed Jul 24 19:51:00 2024 00:11:12.784 read: IOPS=26, BW=104KiB/s (106kB/s)(324KiB/3130msec) 00:11:12.784 slat (usec): min=7, max=10549, avg=396.67, stdev=1919.88 00:11:12.784 clat (usec): min=906, max=43076, avg=37907.46, stdev=12227.61 00:11:12.784 lat (usec): min=914, max=52022, avg=38308.71, stdev=12490.09 00:11:12.784 clat percentiles (usec): 00:11:12.784 | 1.00th=[ 906], 5.00th=[ 1270], 10.00th=[41681], 20.00th=[41681], 00:11:12.784 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:12.784 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:12.784 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:12.784 | 99.99th=[43254] 00:11:12.784 bw ( KiB/s): min= 88, max= 117, per=6.52%, avg=103.50, stdev=10.50, samples=6 00:11:12.784 iops : min= 22, max= 29, avg=25.83, stdev= 2.56, samples=6 00:11:12.784 lat (usec) : 1000=1.22% 00:11:12.784 lat (msec) : 2=8.54%, 50=89.02% 00:11:12.784 cpu : usr=0.00%, sys=0.16%, ctx=85, majf=0, minf=1 00:11:12.784 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.784 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.784 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3559811: Wed Jul 24 19:51:00 2024 00:11:12.784 read: IOPS=100, 
BW=402KiB/s (412kB/s)(1108KiB/2753msec) 00:11:12.784 slat (usec): min=6, max=11185, avg=97.29, stdev=855.47 00:11:12.784 clat (usec): min=332, max=42086, avg=9756.81, stdev=16766.37 00:11:12.784 lat (usec): min=364, max=42111, avg=9854.35, stdev=16750.05 00:11:12.784 clat percentiles (usec): 00:11:12.784 | 1.00th=[ 478], 5.00th=[ 603], 10.00th=[ 668], 20.00th=[ 717], 00:11:12.784 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 963], 00:11:12.784 | 70.00th=[ 1254], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:12.784 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:12.784 | 99.99th=[42206] 00:11:12.784 bw ( KiB/s): min= 96, max= 136, per=6.64%, avg=105.60, stdev=17.34, samples=5 00:11:12.784 iops : min= 24, max= 34, avg=26.40, stdev= 4.34, samples=5 00:11:12.784 lat (usec) : 500=1.80%, 750=23.74%, 1000=34.53% 00:11:12.784 lat (msec) : 2=17.27%, 4=0.36%, 50=21.94% 00:11:12.784 cpu : usr=0.25%, sys=0.25%, ctx=281, majf=0, minf=1 00:11:12.784 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 issued rwts: total=278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.784 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.784 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3559812: Wed Jul 24 19:51:00 2024 00:11:12.784 read: IOPS=313, BW=1252KiB/s (1282kB/s)(3236KiB/2585msec) 00:11:12.784 slat (nsec): min=6575, max=64187, avg=25931.28, stdev=5053.46 00:11:12.784 clat (usec): min=357, max=41077, avg=3131.13, stdev=9296.60 00:11:12.784 lat (usec): min=383, max=41103, avg=3157.06, stdev=9296.55 00:11:12.784 clat percentiles (usec): 00:11:12.784 | 1.00th=[ 578], 5.00th=[ 701], 10.00th=[ 742], 20.00th=[ 791], 00:11:12.784 | 30.00th=[ 816], 40.00th=[ 848], 
50.00th=[ 873], 60.00th=[ 889], 00:11:12.784 | 70.00th=[ 906], 80.00th=[ 922], 90.00th=[ 963], 95.00th=[41157], 00:11:12.784 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:12.784 | 99.99th=[41157] 00:11:12.784 bw ( KiB/s): min= 96, max= 4528, per=75.09%, avg=1187.20, stdev=1914.35, samples=5 00:11:12.784 iops : min= 24, max= 1132, avg=296.80, stdev=478.59, samples=5 00:11:12.784 lat (usec) : 500=0.25%, 750=10.49%, 1000=81.85% 00:11:12.784 lat (msec) : 2=1.60%, 50=5.68% 00:11:12.784 cpu : usr=0.54%, sys=1.20%, ctx=810, majf=0, minf=2 00:11:12.784 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.784 issued rwts: total=810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.784 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.784 00:11:12.784 Run status group 0 (all jobs): 00:11:12.784 READ: bw=1581KiB/s (1619kB/s), 95.8KiB/s-1252KiB/s (98.1kB/s-1282kB/s), io=4948KiB (5067kB), run=2585-3130msec 00:11:12.784 00:11:12.784 Disk stats (read/write): 00:11:12.784 nvme0n1: ios=68/0, merge=0/0, ticks=2815/0, in_queue=2815, util=94.59% 00:11:12.784 nvme0n2: ios=80/0, merge=0/0, ticks=3027/0, in_queue=3027, util=94.79% 00:11:12.784 nvme0n3: ios=147/0, merge=0/0, ticks=3222/0, in_queue=3222, util=99.55% 00:11:12.784 nvme0n4: ios=810/0, merge=0/0, ticks=2437/0, in_queue=2437, util=95.89% 00:11:13.045 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.045 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:13.045 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.045 19:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:13.305 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.305 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:13.305 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.305 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3559618 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:13.566 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.567 19:51:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:13.567 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.567 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:13.567 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:13.567 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:13.567 nvmf hotplug test: fio failed as expected 00:11:13.567 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:13.828 19:51:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:13.828 rmmod nvme_tcp 00:11:13.828 rmmod nvme_fabrics 00:11:13.828 rmmod nvme_keyring 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3556084 ']' 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3556084 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3556084 ']' 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3556084 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.828 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3556084 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3556084' 00:11:14.089 killing process with pid 3556084 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3556084 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3556084 
00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.089 19:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.638 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.638 00:11:16.638 real 0m28.503s 00:11:16.639 user 2m35.136s 00:11:16.639 sys 0m8.898s 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.639 ************************************ 00:11:16.639 END TEST nvmf_fio_target 00:11:16.639 ************************************ 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.639 ************************************ 00:11:16.639 START TEST nvmf_bdevio 00:11:16.639 ************************************ 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:16.639 * Looking for test storage... 00:11:16.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.639 19:51:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.639 19:51:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:23.230 19:51:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.230 19:51:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:23.230 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:23.230 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:23.230 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.230 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:23.231 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:23.231 19:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:23.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:11:23.231 00:11:23.231 --- 10.0.0.2 ping statistics --- 00:11:23.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.231 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:23.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:11:23.231 00:11:23.231 --- 10.0.0.1 ping statistics --- 00:11:23.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.231 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3565305 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3565305 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3565305 ']' 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.231 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.492 [2024-07-24 19:51:11.214251] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:11:23.492 [2024-07-24 19:51:11.214350] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.492 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.492 [2024-07-24 19:51:11.305135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.492 [2024-07-24 19:51:11.400718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.492 [2024-07-24 19:51:11.400776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.492 [2024-07-24 19:51:11.400784] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.492 [2024-07-24 19:51:11.400792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.492 [2024-07-24 19:51:11.400799] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:23.492 [2024-07-24 19:51:11.400962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:23.492 [2024-07-24 19:51:11.401152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:23.492 [2024-07-24 19:51:11.401327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.492 [2024-07-24 19:51:11.401327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:24.063 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.063 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:24.063 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:24.063 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.063 19:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.324 [2024-07-24 19:51:12.045888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.324 19:51:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.324 Malloc0 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.324 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 [2024-07-24 19:51:12.111508] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:24.325 { 00:11:24.325 "params": { 00:11:24.325 "name": "Nvme$subsystem", 00:11:24.325 "trtype": "$TEST_TRANSPORT", 00:11:24.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.325 "adrfam": "ipv4", 00:11:24.325 "trsvcid": "$NVMF_PORT", 00:11:24.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.325 "hdgst": ${hdgst:-false}, 00:11:24.325 "ddgst": ${ddgst:-false} 00:11:24.325 }, 00:11:24.325 "method": "bdev_nvme_attach_controller" 00:11:24.325 } 00:11:24.325 EOF 00:11:24.325 )") 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:24.325 19:51:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:24.325 "params": { 00:11:24.325 "name": "Nvme1", 00:11:24.325 "trtype": "tcp", 00:11:24.325 "traddr": "10.0.0.2", 00:11:24.325 "adrfam": "ipv4", 00:11:24.325 "trsvcid": "4420", 00:11:24.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.325 "hdgst": false, 00:11:24.325 "ddgst": false 00:11:24.325 }, 00:11:24.325 "method": "bdev_nvme_attach_controller" 00:11:24.325 }' 00:11:24.325 [2024-07-24 19:51:12.177082] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:11:24.325 [2024-07-24 19:51:12.177161] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565742 ] 00:11:24.325 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.325 [2024-07-24 19:51:12.243337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.585 [2024-07-24 19:51:12.319249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.585 [2024-07-24 19:51:12.319304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.585 [2024-07-24 19:51:12.319308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.845 I/O targets: 00:11:24.845 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:24.846 00:11:24.846 00:11:24.846 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.846 http://cunit.sourceforge.net/ 00:11:24.846 00:11:24.846 00:11:24.846 Suite: bdevio tests on: Nvme1n1 00:11:24.846 Test: blockdev write read block ...passed 00:11:24.846 Test: blockdev write zeroes read block ...passed 00:11:24.846 Test: blockdev write zeroes read no split 
...passed 00:11:24.846 Test: blockdev write zeroes read split ...passed 00:11:25.105 Test: blockdev write zeroes read split partial ...passed 00:11:25.105 Test: blockdev reset ...[2024-07-24 19:51:12.830480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:25.105 [2024-07-24 19:51:12.830552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5bce0 (9): Bad file descriptor 00:11:25.105 [2024-07-24 19:51:12.851739] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:25.105 passed 00:11:25.105 Test: blockdev write read 8 blocks ...passed 00:11:25.105 Test: blockdev write read size > 128k ...passed 00:11:25.105 Test: blockdev write read invalid size ...passed 00:11:25.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.105 Test: blockdev write read max offset ...passed 00:11:25.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.105 Test: blockdev writev readv 8 blocks ...passed 00:11:25.105 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.105 Test: blockdev writev readv block ...passed 00:11:25.105 Test: blockdev writev readv size > 128k ...passed 00:11:25.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.105 Test: blockdev comparev and writev ...[2024-07-24 19:51:13.030229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.105 [2024-07-24 19:51:13.030254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:25.105 [2024-07-24 19:51:13.030265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:11:25.105 [2024-07-24 19:51:13.030271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:25.105 [2024-07-24 19:51:13.030697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.105 [2024-07-24 19:51:13.030705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:25.105 [2024-07-24 19:51:13.030715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.105 [2024-07-24 19:51:13.030721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:25.105 [2024-07-24 19:51:13.031120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.105 [2024-07-24 19:51:13.031127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:25.105 [2024-07-24 19:51:13.031137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.105 [2024-07-24 19:51:13.031142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:25.105 [2024-07-24 19:51:13.031535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.105 [2024-07-24 19:51:13.031543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:25.105 [2024-07-24 19:51:13.031552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.105 [2024-07-24 19:51:13.031558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:25.365 passed 00:11:25.365 Test: blockdev nvme passthru rw ...passed 00:11:25.365 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:51:13.113765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.365 [2024-07-24 19:51:13.113774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:25.365 [2024-07-24 19:51:13.114039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.365 [2024-07-24 19:51:13.114046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:25.365 [2024-07-24 19:51:13.114352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.365 [2024-07-24 19:51:13.114363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:25.365 [2024-07-24 19:51:13.114513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.365 [2024-07-24 19:51:13.114520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:25.365 passed 00:11:25.365 Test: blockdev nvme admin passthru ...passed 00:11:25.365 Test: blockdev copy ...passed 00:11:25.365 00:11:25.365 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.365 suites 1 1 n/a 0 0 00:11:25.365 tests 23 23 23 0 0 00:11:25.365 asserts 152 152 152 0 n/a 00:11:25.365 00:11:25.365 Elapsed time = 
1.022 seconds 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.365 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.365 rmmod nvme_tcp 00:11:25.625 rmmod nvme_fabrics 00:11:25.625 rmmod nvme_keyring 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3565305 ']' 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3565305 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 3565305 ']' 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3565305 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3565305 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3565305' 00:11:25.625 killing process with pid 3565305 00:11:25.625 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3565305 00:11:25.626 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3565305 00:11:25.887 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.887 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.887 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.887 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.887 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.887 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.887 19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.887 
19:51:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.801 19:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.801 00:11:27.801 real 0m11.572s 00:11:27.801 user 0m12.992s 00:11:27.801 sys 0m5.823s 00:11:27.801 19:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.801 19:51:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.801 ************************************ 00:11:27.801 END TEST nvmf_bdevio 00:11:27.801 ************************************ 00:11:27.801 19:51:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:27.801 00:11:27.801 real 4m54.848s 00:11:27.801 user 11m33.930s 00:11:27.801 sys 1m43.600s 00:11:27.802 19:51:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.802 19:51:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.802 ************************************ 00:11:27.802 END TEST nvmf_target_core 00:11:27.802 ************************************ 00:11:27.802 19:51:15 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.802 19:51:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.802 19:51:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.802 19:51:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.063 ************************************ 00:11:28.063 START TEST nvmf_target_extra 00:11:28.063 ************************************ 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:28.063 * Looking for test storage... 
00:11:28.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:28.063 19:51:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.064 19:51:15 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.064 ************************************ 00:11:28.064 START TEST nvmf_example 00:11:28.064 ************************************ 00:11:28.064 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.326 * Looking for test storage... 
00:11:28.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:28.326 19:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.326 19:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.326 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:34.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:34.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.963 19:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.963 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:34.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:34.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.964 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.226 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.226 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.226 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.226 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:11:35.226 00:11:35.226 --- 10.0.0.2 ping statistics --- 00:11:35.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.226 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:11:35.226 00:11:35.226 --- 10.0.0.1 ping statistics --- 00:11:35.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.226 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3570142 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3570142 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3570142 ']' 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
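The `nvmf/common.sh` trace above amounts to a small point-to-point test topology: one interface (`cvl_0_1`, 10.0.0.1) stays in the root namespace for the initiator side, while its peer (`cvl_0_0`, 10.0.0.2) is moved into a private network namespace where the SPDK example target is then launched. A standalone sketch of those steps, with the interface names, addresses, and port taken from the trace (requires root; this is environment provisioning, not part of the test scripts themselves):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Target side gets its own network namespace; initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Addresses as in the trace: initiator 10.0.0.1, target 10.0.0.2, same /24.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring up both ends, plus loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port, then verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Everything the target side runs afterwards (the `nvmf` example app, its RPCs) is then wrapped in `ip netns exec cvl_0_0_ns_spdk`, which is what the `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` line in the trace arranges.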
00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.226 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.487 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.057 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.057 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:36.057 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:36.057 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.057 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.317 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.318 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.318 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.318 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:36.318 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.318 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:36.318 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:36.318 EAL: No free 2048 kB hugepages reported on node 1
00:11:48.550 Initializing NVMe Controllers
00:11:48.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:48.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:48.550 Initialization complete. Launching workers.
00:11:48.550 ========================================================
00:11:48.550                                                                                                               Latency(us)
00:11:48.550 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:48.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   15050.37      58.79    4251.94     873.76   17282.08
00:11:48.550 ========================================================
00:11:48.550 Total                                                                    :   15050.37      58.79    4251.94     873.76   17282.08
00:11:48.550 
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:48.550 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:48.550 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3570142 ']' 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3570142 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3570142 ']' 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3570142 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3570142 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3570142' 00:11:48.551 killing process with pid 3570142 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3570142 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3570142 00:11:48.551 nvmf threads initialize successfully 00:11:48.551 bdev subsystem init successfully 00:11:48.551 created a nvmf target service 00:11:48.551 create targets's poll groups done 00:11:48.551 all subsystems of target started 00:11:48.551 nvmf target is running 00:11:48.551 all subsystems of target stopped 00:11:48.551 destroy targets's poll groups done 00:11:48.551 destroyed the nvmf target 
service 00:11:48.551 bdev subsystem finish successfully 00:11:48.551 nvmf threads destroy successfully 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.551 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.812 00:11:48.812 real 0m20.752s 00:11:48.812 user 0m46.450s 00:11:48.812 sys 0m6.320s 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.812 ************************************ 00:11:48.812 END TEST nvmf_example 00:11:48.812 ************************************ 00:11:48.812 19:51:36 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.812 ************************************ 00:11:48.812 START TEST nvmf_filesystem 00:11:48.812 ************************************ 00:11:48.812 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:49.077 * Looking for test storage... 00:11:49.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:49.077 19:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:49.077 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:49.078 #define SPDK_CONFIG_H 00:11:49.078 #define SPDK_CONFIG_APPS 1 00:11:49.078 #define SPDK_CONFIG_ARCH native 00:11:49.078 #undef SPDK_CONFIG_ASAN 00:11:49.078 #undef SPDK_CONFIG_AVAHI 00:11:49.078 #undef SPDK_CONFIG_CET 00:11:49.078 #define SPDK_CONFIG_COVERAGE 1 00:11:49.078 #define SPDK_CONFIG_CROSS_PREFIX 00:11:49.078 #undef SPDK_CONFIG_CRYPTO 00:11:49.078 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:49.078 #undef SPDK_CONFIG_CUSTOMOCF 00:11:49.078 #undef SPDK_CONFIG_DAOS 00:11:49.078 #define SPDK_CONFIG_DAOS_DIR 00:11:49.078 #define SPDK_CONFIG_DEBUG 1 00:11:49.078 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:49.078 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:49.078 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:49.078 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:49.078 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:49.078 #undef SPDK_CONFIG_DPDK_UADK 00:11:49.078 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:49.078 #define SPDK_CONFIG_EXAMPLES 1 00:11:49.078 #undef SPDK_CONFIG_FC 00:11:49.078 #define SPDK_CONFIG_FC_PATH 00:11:49.078 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:49.078 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:49.078 
#undef SPDK_CONFIG_FUSE 00:11:49.078 #undef SPDK_CONFIG_FUZZER 00:11:49.078 #define SPDK_CONFIG_FUZZER_LIB 00:11:49.078 #undef SPDK_CONFIG_GOLANG 00:11:49.078 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:49.078 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:49.078 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:49.078 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:49.078 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:49.078 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:49.078 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:49.078 #define SPDK_CONFIG_IDXD 1 00:11:49.078 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:49.078 #undef SPDK_CONFIG_IPSEC_MB 00:11:49.078 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:49.078 #define SPDK_CONFIG_ISAL 1 00:11:49.078 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:49.078 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:49.078 #define SPDK_CONFIG_LIBDIR 00:11:49.078 #undef SPDK_CONFIG_LTO 00:11:49.078 #define SPDK_CONFIG_MAX_LCORES 128 00:11:49.078 #define SPDK_CONFIG_NVME_CUSE 1 00:11:49.078 #undef SPDK_CONFIG_OCF 00:11:49.078 #define SPDK_CONFIG_OCF_PATH 00:11:49.078 #define SPDK_CONFIG_OPENSSL_PATH 00:11:49.078 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:49.078 #define SPDK_CONFIG_PGO_DIR 00:11:49.078 #undef SPDK_CONFIG_PGO_USE 00:11:49.078 #define SPDK_CONFIG_PREFIX /usr/local 00:11:49.078 #undef SPDK_CONFIG_RAID5F 00:11:49.078 #undef SPDK_CONFIG_RBD 00:11:49.078 #define SPDK_CONFIG_RDMA 1 00:11:49.078 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:49.078 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:49.078 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:49.078 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:49.078 #define SPDK_CONFIG_SHARED 1 00:11:49.078 #undef SPDK_CONFIG_SMA 00:11:49.078 #define SPDK_CONFIG_TESTS 1 00:11:49.078 #undef SPDK_CONFIG_TSAN 00:11:49.078 #define SPDK_CONFIG_UBLK 1 00:11:49.078 #define SPDK_CONFIG_UBSAN 1 00:11:49.078 #undef SPDK_CONFIG_UNIT_TESTS 00:11:49.078 #undef SPDK_CONFIG_URING 00:11:49.078 #define SPDK_CONFIG_URING_PATH 00:11:49.078 #undef 
SPDK_CONFIG_URING_ZNS 00:11:49.078 #undef SPDK_CONFIG_USDT 00:11:49.078 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:49.078 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:49.078 #define SPDK_CONFIG_VFIO_USER 1 00:11:49.078 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:49.078 #define SPDK_CONFIG_VHOST 1 00:11:49.078 #define SPDK_CONFIG_VIRTIO 1 00:11:49.078 #undef SPDK_CONFIG_VTUNE 00:11:49.078 #define SPDK_CONFIG_VTUNE_DIR 00:11:49.078 #define SPDK_CONFIG_WERROR 1 00:11:49.078 #define SPDK_CONFIG_WPDK_DIR 00:11:49.078 #undef SPDK_CONFIG_XNVME 00:11:49.078 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.078 19:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:49.078 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:49.079 19:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:49.079 
19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:49.079 19:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:49.079 
19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:49.079 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:49.079 19:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:49.080 
19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:49.080 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j144 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 3572926 ]] 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 3572926 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.VrpXIB 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VrpXIB/tests/target /tmp/spdk.VrpXIB 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=954236928 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:49.081 19:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330192896 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=118592421888 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=129370976256 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10778554368 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64623304704 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685486080 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.081 19:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=25850851328 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=25874198528 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23347200 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=efivarfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=efivarfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=216064 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=507904 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=287744 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64684015616 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685490176 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=1474560 00:11:49.081 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12937093120 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12937097216 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:49.082 * Looking for test storage... 
00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:49.082 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=118592421888 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=12993146880 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.082 19:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:49.082 19:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.082 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- 
# nvmftestinit 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.345 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 
00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:57.493 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:57.493 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:57.493 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:57.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:57.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:57.494 
19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.494 19:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:57.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:11:57.494 00:11:57.494 --- 10.0.0.2 ping statistics --- 00:11:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.494 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:11:57.494 00:11:57.494 --- 10.0.0.1 ping statistics --- 00:11:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.494 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.494 ************************************ 00:11:57.494 START TEST nvmf_filesystem_no_in_capsule 00:11:57.494 ************************************ 00:11:57.494 19:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3576661 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3576661 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3576661 ']' 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.494 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.494 [2024-07-24 19:51:44.608372] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:11:57.494 [2024-07-24 19:51:44.608432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.494 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.494 [2024-07-24 19:51:44.681931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.494 [2024-07-24 19:51:44.756355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.494 [2024-07-24 19:51:44.756399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.494 [2024-07-24 19:51:44.756407] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.494 [2024-07-24 19:51:44.756413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.494 [2024-07-24 19:51:44.756419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.495 [2024-07-24 19:51:44.756569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.495 [2024-07-24 19:51:44.756692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.495 [2024-07-24 19:51:44.756851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.495 [2024-07-24 19:51:44.756852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.495 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.495 [2024-07-24 
19:51:45.438210] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 Malloc1 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 19:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 [2024-07-24 19:51:45.565410] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.756 19:51:45 
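The trace up to this point drives a fixed RPC sequence against the freshly started nvmf_tgt: create the TCP transport with in-capsule data size 0, back it with a 512 MiB malloc bdev, and expose that as a subsystem listening on 10.0.0.2:4420. A dry-run sketch of that sequence, with `RPC` set to `echo` so the commands are printed rather than executed (pointing it at SPDK's scripts/rpc.py instead assumes a running nvmf_tgt on the default /var/tmp/spdk.sock):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC calls seen in the trace above. RPC="echo rpc.py"
# prints each command; replace with the real rpc.py path to execute.
RPC="echo rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 512 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `-c 0` on the transport is what makes this the "no_in_capsule" variant of the test; the later in-capsule run changes only that value.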
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:57.756 { 00:11:57.756 "name": "Malloc1", 00:11:57.756 "aliases": [ 00:11:57.756 "af6cf192-f098-44a6-a556-75502cb1926e" 00:11:57.756 ], 00:11:57.756 "product_name": "Malloc disk", 00:11:57.756 "block_size": 512, 00:11:57.756 "num_blocks": 1048576, 00:11:57.756 "uuid": "af6cf192-f098-44a6-a556-75502cb1926e", 00:11:57.756 "assigned_rate_limits": { 00:11:57.756 "rw_ios_per_sec": 0, 00:11:57.756 "rw_mbytes_per_sec": 0, 00:11:57.756 "r_mbytes_per_sec": 0, 00:11:57.756 "w_mbytes_per_sec": 0 00:11:57.756 }, 00:11:57.756 "claimed": true, 00:11:57.756 "claim_type": "exclusive_write", 00:11:57.756 "zoned": false, 00:11:57.756 "supported_io_types": { 00:11:57.756 "read": true, 00:11:57.756 "write": true, 00:11:57.756 "unmap": true, 00:11:57.756 "flush": true, 00:11:57.756 "reset": true, 00:11:57.756 "nvme_admin": false, 00:11:57.756 "nvme_io": false, 00:11:57.756 "nvme_io_md": false, 00:11:57.756 "write_zeroes": true, 00:11:57.756 "zcopy": true, 00:11:57.756 "get_zone_info": false, 00:11:57.756 "zone_management": false, 00:11:57.756 "zone_append": false, 00:11:57.756 "compare": false, 00:11:57.756 "compare_and_write": false, 00:11:57.756 "abort": true, 00:11:57.756 "seek_hole": false, 00:11:57.756 "seek_data": false, 00:11:57.756 "copy": true, 00:11:57.756 "nvme_iov_md": false 00:11:57.756 }, 00:11:57.756 "memory_domains": [ 00:11:57.756 { 00:11:57.756 "dma_device_id": "system", 00:11:57.756 "dma_device_type": 1 00:11:57.756 }, 00:11:57.756 { 00:11:57.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.756 "dma_device_type": 2 00:11:57.756 } 00:11:57.756 ], 
00:11:57.756 "driver_specific": {} 00:11:57.756 } 00:11:57.756 ]' 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:57.756 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.669 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.669 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.669 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.669 19:51:47 
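After `nvme connect`, the trace enters waitforserial: sleep, then poll `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the expected device count appears, giving up after 15 tries. A runnable sketch of that retry pattern, with a counter stand-in (hypothetical) replacing the real lsblk pipe so the device "appears" on the third poll:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop; the lsblk|grep -c pipe is
# replaced by a counter that reports the device present on poll 3.
i=0
nvme_devices=0
while (( i++ <= 15 )); do
  if (( i >= 3 )); then nvme_devices=1; fi   # stand-in for lsblk|grep -c
  if (( nvme_devices == 1 )); then
    echo "device present after $i polls"
    break
  fi
  sleep 0.1   # the real script sleeps 2 s before polling
done
```

In the log the loop exits on its first check because the two-second `sleep 2` before it already gave the kernel time to surface the namespace.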
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:59.669 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:01.583 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:01.583 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:01.583 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.583 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:01.583 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.583 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:01.583 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:01.584 19:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:01.584 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:01.844 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.789 ************************************ 
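The two derivations in this stretch of the trace are the kernel device name (the NAME column in front of the subsystem serial in lsblk output) and the device size in bytes (sec_size_to_bytes multiplies the 512-byte sector count from /sys/block/<dev>/size). A runnable sketch with the lsblk line hard-coded (the real script reads it live):

```shell
#!/usr/bin/env bash
# Derive the device name via the same lookahead regex the script uses,
# then convert a sector count to bytes as sec_size_to_bytes does.
lsblk_out="nvme0n1 SPDKISFASTANDAWESOME"
nvme_name=$(echo "$lsblk_out" | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
echo "$nvme_name"          # nvme0n1
sectors=1048576            # what the script reads from /sys/block/nvme0n1/size
echo $((sectors * 512))    # 536870912 bytes, matching the 512 MiB malloc bdev
```

The byte count matching malloc_size (536870912) is what lets the `(( nvme_size == malloc_size ))` check pass before partitioning.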
00:12:02.789 START TEST filesystem_ext4 00:12:02.789 ************************************ 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:02.789 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:02.789 mke2fs 1.46.5 (30-Dec-2021) 00:12:02.789 
Discarding device blocks: 0/522240 done 00:12:02.789 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:02.789 Filesystem UUID: 092abcc9-c914-4028-a067-fc2f237648ae 00:12:02.789 Superblock backups stored on blocks: 00:12:02.789 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:02.789 00:12:02.789 Allocating group tables: 0/64 done 00:12:02.789 Writing inode tables: 0/64 done 00:12:03.056 Creating journal (8192 blocks): done 00:12:03.056 Writing superblocks and filesystem accounting information: 0/64 done 00:12:03.056 00:12:03.056 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:03.056 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3576661 00:12:03.317 19:51:51 
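Each filesystem variant in this test runs the same smoke test after mkfs: mount the partition, touch a file, sync, remove it, sync again, and unmount. A sketch that runs anywhere, with a mktemp directory standing in for the real /dev/nvme0n1p1 mount on /mnt/device:

```shell
#!/usr/bin/env bash
# Sketch of the per-filesystem smoke test (touch/sync/rm/sync/umount);
# a temp directory substitutes for the real mounted NVMe partition.
mnt=$(mktemp -d)
touch "$mnt/aaa"
sync
rm "$mnt/aaa"
sync
rmdir "$mnt"   # the real script runs: umount /mnt/device
echo "filesystem smoke test passed"
```

The `kill -0 3576661` that follows in the log is not part of the I/O test; it only verifies the target process survived the workload.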
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.317 00:12:03.317 real 0m0.538s 00:12:03.317 user 0m0.025s 00:12:03.317 sys 0m0.071s 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.317 ************************************ 00:12:03.317 END TEST filesystem_ext4 00:12:03.317 ************************************ 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.317 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.578 ************************************ 00:12:03.578 START TEST filesystem_btrfs 00:12:03.578 ************************************ 00:12:03.578 19:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:03.578 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.839 btrfs-progs v6.6.2 00:12:03.839 See https://btrfs.readthedocs.io for more information. 
00:12:03.839 00:12:03.839 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:03.839 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.839 this does not affect your deployments: 00:12:03.839 - DUP for metadata (-m dup) 00:12:03.839 - enabled no-holes (-O no-holes) 00:12:03.839 - enabled free-space-tree (-R free-space-tree) 00:12:03.839 00:12:03.839 Label: (null) 00:12:03.839 UUID: 1d57ecfd-8cbd-4b27-b25a-367a55fe4642 00:12:03.839 Node size: 16384 00:12:03.839 Sector size: 4096 00:12:03.839 Filesystem size: 510.00MiB 00:12:03.839 Block group profiles: 00:12:03.839 Data: single 8.00MiB 00:12:03.839 Metadata: DUP 32.00MiB 00:12:03.839 System: DUP 8.00MiB 00:12:03.839 SSD detected: yes 00:12:03.839 Zoned device: no 00:12:03.839 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.839 Runtime features: free-space-tree 00:12:03.839 Checksum: crc32c 00:12:03.839 Number of devices: 1 00:12:03.839 Devices: 00:12:03.839 ID SIZE PATH 00:12:03.839 1 510.00MiB /dev/nvme0n1p1 00:12:03.839 00:12:03.839 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:03.839 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
target/filesystem.sh@27 -- # sync 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3576661 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.782 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.047 00:12:05.047 real 0m1.451s 00:12:05.047 user 0m0.033s 00:12:05.047 sys 0m0.130s 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:05.047 ************************************ 00:12:05.047 END TEST filesystem_btrfs 00:12:05.047 ************************************ 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.047 ************************************ 00:12:05.047 START TEST filesystem_xfs 00:12:05.047 ************************************ 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:05.047 
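The `'[' xfs = ext4 ']'` check visible here is make_filesystem picking the "force" flag per filesystem: mkfs.ext4 spells it -F, while mkfs.btrfs and mkfs.xfs use -f. A runnable sketch of just that selection logic (no mkfs is actually invoked; the chosen command line is printed):

```shell
#!/usr/bin/env bash
# Sketch of make_filesystem's force-flag selection for ext4/btrfs/xfs.
pick_force() {
  if [ "$1" = ext4 ]; then echo -F; else echo -f; fi
}
for fstype in ext4 btrfs xfs; do
  echo "mkfs.$fstype $(pick_force "$fstype") /dev/nvme0n1p1"
done
```

Forcing matters here because the same partition is reformatted three times in a row, so each mkfs must overwrite the previous filesystem's signature without prompting.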
19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:05.047 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:05.047 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:05.047 = sectsz=512 attr=2, projid32bit=1 00:12:05.047 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:05.047 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:05.048 data = bsize=4096 blocks=130560, imaxpct=25 00:12:05.048 = sunit=0 swidth=0 blks 00:12:05.048 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:05.048 log =internal log bsize=4096 blocks=16384, version=2 00:12:05.048 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:05.048 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:06.024 Discarding blocks...Done. 00:12:06.024 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:06.024 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
target/filesystem.sh@29 -- # i=0 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3576661 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.937 00:12:07.937 real 0m2.923s 00:12:07.937 user 0m0.022s 00:12:07.937 sys 0m0.080s 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:07.937 ************************************ 00:12:07.937 END TEST filesystem_xfs 00:12:07.937 ************************************ 00:12:07.937 19:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:12:08.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:08.509 19:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3576661 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3576661 ']' 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3576661 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3576661 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3576661' 00:12:08.509 killing process with pid 3576661 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3576661 00:12:08.509 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3576661 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:08.770 00:12:08.770 real 0m12.044s 00:12:08.770 user 0m47.379s 00:12:08.770 sys 0m1.225s 00:12:08.770 19:51:56 
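The `killprocess 3576661` call above (`autotest_common.sh@950-974`) rejects an empty pid, inspects the process name so it never signals `sudo`, kills, and waits. A reduced sketch of that shape; the function name and the liveness-only check are mine (the real helper also checks `ps --no-headers -o comm=` as the trace shows).

```shell
# Reduced sketch of the killprocess pattern from autotest_common.sh:
# refuse an empty pid, probe liveness with kill -0, then kill and reap.
killprocess_sketch() {
  local pid=$1
  [ -n "$pid" ] || return 1               # mirrors the '[' -z ... ']' guard
  kill -0 "$pid" 2>/dev/null || return 0  # already gone: nothing to do
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true         # reap, like the '# wait' step
  return 0
}
```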
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.770 ************************************ 00:12:08.770 END TEST nvmf_filesystem_no_in_capsule 00:12:08.770 ************************************ 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.770 ************************************ 00:12:08.770 START TEST nvmf_filesystem_in_capsule 00:12:08.770 ************************************ 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.770 19:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3579263 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3579263 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3579263 ']' 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.770 19:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.032 [2024-07-24 19:51:56.728670] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:12:09.032 [2024-07-24 19:51:56.728725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.032 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.032 [2024-07-24 19:51:56.797645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.032 [2024-07-24 19:51:56.874085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.032 [2024-07-24 19:51:56.874121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.032 [2024-07-24 19:51:56.874129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.032 [2024-07-24 19:51:56.874135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.032 [2024-07-24 19:51:56.874141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:09.032 [2024-07-24 19:51:56.874235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.032 [2024-07-24 19:51:56.874468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.032 [2024-07-24 19:51:56.874469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.032 [2024-07-24 19:51:56.874310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.604 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.604 [2024-07-24 19:51:57.556157] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.865 Malloc1 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.865 19:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.865 [2024-07-24 19:51:57.681308] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.865 19:51:57 
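Steps `@52` through `@56` above configure the in-capsule target over RPC: a TCP transport with a 4096-byte in-capsule data size, a 512-block by 512-byte malloc bdev, a subsystem, a namespace, and a listener. Collected as a dry-run list below; the wrapper function is only for illustration, the test itself issues each line through `rpc_cmd`.

```shell
# The rpc_cmd sequence from filesystem.sh@52-56, emitted as a dry-run
# list. The NQN, serial, address, and in-capsule size (-c 4096) are the
# ones the log shows; the wrapper function itself is illustrative.
nqn=nqn.2016-06.io.spdk:cnode1
in_capsule_setup_cmds() {
  cat <<EOF
nvmf_create_transport -t tcp -o -u 8192 -c 4096
bdev_malloc_create 512 512 -b Malloc1
nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns $nqn Malloc1
nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
EOF
}
```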
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.865 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:09.865 { 00:12:09.865 "name": "Malloc1", 00:12:09.865 "aliases": [ 00:12:09.865 "7609800a-cbb4-4bb6-88df-ded6b6888a0f" 00:12:09.865 ], 00:12:09.865 "product_name": "Malloc disk", 00:12:09.865 "block_size": 512, 00:12:09.865 "num_blocks": 1048576, 00:12:09.865 "uuid": "7609800a-cbb4-4bb6-88df-ded6b6888a0f", 00:12:09.865 "assigned_rate_limits": { 00:12:09.865 "rw_ios_per_sec": 0, 00:12:09.865 "rw_mbytes_per_sec": 0, 00:12:09.865 "r_mbytes_per_sec": 0, 00:12:09.865 "w_mbytes_per_sec": 0 00:12:09.865 }, 00:12:09.865 "claimed": true, 00:12:09.865 "claim_type": "exclusive_write", 00:12:09.865 "zoned": false, 00:12:09.866 "supported_io_types": { 00:12:09.866 "read": true, 00:12:09.866 "write": true, 00:12:09.866 "unmap": true, 00:12:09.866 "flush": true, 00:12:09.866 "reset": true, 00:12:09.866 "nvme_admin": false, 00:12:09.866 "nvme_io": false, 00:12:09.866 "nvme_io_md": false, 00:12:09.866 "write_zeroes": true, 00:12:09.866 "zcopy": true, 00:12:09.866 "get_zone_info": false, 00:12:09.866 "zone_management": false, 00:12:09.866 "zone_append": false, 00:12:09.866 "compare": false, 00:12:09.866 "compare_and_write": false, 00:12:09.866 "abort": true, 00:12:09.866 "seek_hole": false, 00:12:09.866 "seek_data": false, 00:12:09.866 "copy": true, 00:12:09.866 "nvme_iov_md": false 00:12:09.866 }, 00:12:09.866 "memory_domains": [ 00:12:09.866 { 00:12:09.866 "dma_device_id": "system", 00:12:09.866 "dma_device_type": 1 00:12:09.866 }, 00:12:09.866 { 00:12:09.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.866 "dma_device_type": 2 00:12:09.866 } 00:12:09.866 ], 00:12:09.866 
"driver_specific": {} 00:12:09.866 } 00:12:09.866 ]' 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:09.866 19:51:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.838 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.838 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:11.838 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.838 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:12:11.838 19:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:13.753 19:52:01 
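`waitforserial` above (`autotest_common.sh@1198-1208`) polls `lsblk -l -o NAME,SERIAL` for the expected serial, sleeping 2 s between attempts for up to ~15 tries; `waitforserial_disconnect` earlier runs the inverted check. A generic sketch of that poll loop; the name, argument order, and injectable condition string are mine, and the real helper compares a `grep -c` device count rather than a boolean.

```shell
# Generic poll loop in the shape of waitforserial: retry a condition
# up to max times with a fixed sleep between attempts.
wait_for() {
  local cond=$1 max=${2:-15} delay=${3:-2} i=0
  until eval "$cond"; do
    i=$((i + 1))
    [ "$i" -le "$max" ] || return 1   # give up after max attempts
    sleep "$delay"
  done
  return 0
}
```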
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:13.753 19:52:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:14.325 19:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.267 ************************************ 00:12:15.267 START TEST filesystem_in_capsule_ext4 00:12:15.267 ************************************ 00:12:15.267 19:52:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:15.267 19:52:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:15.267 mke2fs 1.46.5 (30-Dec-2021) 00:12:15.527 Discarding device blocks: 
0/522240 done 00:12:15.527 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:15.527 Filesystem UUID: f47798a2-6097-4f2c-8147-0b220fa6aee5 00:12:15.527 Superblock backups stored on blocks: 00:12:15.527 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:15.527 00:12:15.527 Allocating group tables: 0/64 done 00:12:15.527 Writing inode tables: 0/64 done 00:12:17.439 Creating journal (8192 blocks): done 00:12:17.699 Writing superblocks and filesystem accounting information: 0/64 done 00:12:17.699 00:12:17.699 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:17.699 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3579263 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.959 00:12:17.959 real 0m2.578s 00:12:17.959 user 0m0.026s 00:12:17.959 sys 0m0.070s 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:17.959 ************************************ 00:12:17.959 END TEST filesystem_in_capsule_ext4 00:12:17.959 ************************************ 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.959 ************************************ 00:12:17.959 START 
TEST filesystem_in_capsule_btrfs 00:12:17.959 ************************************ 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:17.959 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:18.219 btrfs-progs v6.6.2 00:12:18.219 See https://btrfs.readthedocs.io for more information. 00:12:18.219 00:12:18.219 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:18.219 NOTE: several default settings have changed in version 5.15, please make sure 00:12:18.219 this does not affect your deployments: 00:12:18.219 - DUP for metadata (-m dup) 00:12:18.219 - enabled no-holes (-O no-holes) 00:12:18.219 - enabled free-space-tree (-R free-space-tree) 00:12:18.219 00:12:18.219 Label: (null) 00:12:18.219 UUID: ac21839f-8349-4449-a196-fab96a30c9b7 00:12:18.219 Node size: 16384 00:12:18.219 Sector size: 4096 00:12:18.219 Filesystem size: 510.00MiB 00:12:18.219 Block group profiles: 00:12:18.219 Data: single 8.00MiB 00:12:18.219 Metadata: DUP 32.00MiB 00:12:18.219 System: DUP 8.00MiB 00:12:18.219 SSD detected: yes 00:12:18.219 Zoned device: no 00:12:18.219 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:18.219 Runtime features: free-space-tree 00:12:18.219 Checksum: crc32c 00:12:18.219 Number of devices: 1 00:12:18.219 Devices: 00:12:18.219 ID SIZE PATH 00:12:18.219 1 510.00MiB /dev/nvme0n1p1 00:12:18.219 00:12:18.219 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:18.219 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:18.479 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:18.479 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:18.479 19:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:18.479 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:18.479 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:18.479 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3579263 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:18.741 00:12:18.741 real 0m0.613s 00:12:18.741 user 0m0.028s 00:12:18.741 sys 0m0.134s 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:18.741 ************************************ 00:12:18.741 END TEST 
filesystem_in_capsule_btrfs 00:12:18.741 ************************************ 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.741 ************************************ 00:12:18.741 START TEST filesystem_in_capsule_xfs 00:12:18.741 ************************************ 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:18.741 19:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:18.741 19:52:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:18.741 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:18.741 = sectsz=512 attr=2, projid32bit=1 00:12:18.741 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:18.741 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:18.741 data = bsize=4096 blocks=130560, imaxpct=25 00:12:18.741 = sunit=0 swidth=0 blks 00:12:18.741 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:18.741 log =internal log bsize=4096 blocks=16384, version=2 00:12:18.741 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:18.741 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:20.126 Discarding blocks...Done. 
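The `make_filesystem` helper traced above (force-flag selection, then `mkfs.$fstype`) can be sketched roughly as follows. This is an assumption-laden reconstruction from the trace, not the actual `autotest_common.sh` source: the function name suffix, the retry count, and the sleep are illustrative.

```shell
# Sketch of the make_filesystem logic seen in the trace (hypothetical
# reconstruction; retry count and sleep interval are assumptions).
make_filesystem_sketch() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force

    # ext4's mkfs forces with -F; btrfs and xfs use -f, as the
    # '[' btrfs = ext4 ']' / force=-f lines in the trace show.
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    # Retry a few times in case the block device is still settling.
    while [ "$i" -lt 3 ]; do
        if "mkfs.$fstype" $force "$dev_name"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

With this shape, the `return 0` entries in the trace correspond to the first successful `mkfs` invocation inside the loop.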
00:12:20.126 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:20.126 19:52:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.512 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3579263 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.773 00:12:21.773 real 0m3.017s 00:12:21.773 user 0m0.033s 00:12:21.773 sys 0m0.072s 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:21.773 ************************************ 00:12:21.773 END TEST filesystem_in_capsule_xfs 00:12:21.773 ************************************ 00:12:21.773 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:22.034 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:22.034 19:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.295 19:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3579263 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3579263 ']' 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3579263 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.295 19:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3579263 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3579263' 00:12:22.295 killing process with pid 3579263 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3579263 00:12:22.295 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3579263 00:12:22.555 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:22.555 00:12:22.555 real 0m13.744s 00:12:22.556 user 0m54.198s 00:12:22.556 sys 0m1.210s 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.556 ************************************ 00:12:22.556 END TEST nvmf_filesystem_in_capsule 00:12:22.556 ************************************ 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.556 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.556 rmmod nvme_tcp 00:12:22.556 rmmod nvme_fabrics 00:12:22.556 rmmod nvme_keyring 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.816 19:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.735 00:12:24.735 real 
0m35.845s 00:12:24.735 user 1m43.830s 00:12:24.735 sys 0m8.152s 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.735 ************************************ 00:12:24.735 END TEST nvmf_filesystem 00:12:24.735 ************************************ 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.735 ************************************ 00:12:24.735 START TEST nvmf_target_discovery 00:12:24.735 ************************************ 00:12:24.735 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:24.996 * Looking for test storage... 
00:12:24.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.996 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.997 19:52:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.636 
19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.636 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:12:31.637 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:31.637 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.637 19:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:31.637 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.637 19:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:31.637 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:12:31.637 00:12:31.637 --- 10.0.0.2 ping statistics --- 00:12:31.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.637 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:12:31.637 00:12:31.637 --- 10.0.0.1 ping statistics --- 00:12:31.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.637 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:31.637 19:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.637 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3586231 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3586231 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3586231 ']' 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.638 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.638 [2024-07-24 19:52:19.406136] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:12:31.638 [2024-07-24 19:52:19.406213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.638 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.638 [2024-07-24 19:52:19.477561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.638 [2024-07-24 19:52:19.553174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.638 [2024-07-24 19:52:19.553219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.638 [2024-07-24 19:52:19.553227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.638 [2024-07-24 19:52:19.553234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.638 [2024-07-24 19:52:19.553239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:31.638 [2024-07-24 19:52:19.553322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.638 [2024-07-24 19:52:19.553426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.638 [2024-07-24 19:52:19.553582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.638 [2024-07-24 19:52:19.553583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.280 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.541 [2024-07-24 19:52:20.235219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.541 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.541 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:32.542 19:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 Null1 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 [2024-07-24 19:52:20.295531] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 Null2 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 
19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 Null3 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.542 Null4 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:32.542 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:32.543 19:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.543 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:32.804 00:12:32.804 Discovery Log Number of Records 6, Generation counter 6 00:12:32.804 =====Discovery Log Entry 0====== 00:12:32.804 trtype: tcp 00:12:32.804 adrfam: ipv4 00:12:32.804 subtype: current discovery subsystem 00:12:32.804 treq: not required 00:12:32.804 portid: 0 00:12:32.804 trsvcid: 4420 00:12:32.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:32.804 traddr: 10.0.0.2 00:12:32.804 eflags: explicit discovery connections, duplicate discovery information 00:12:32.804 sectype: none 00:12:32.804 =====Discovery Log Entry 1====== 00:12:32.804 trtype: tcp 00:12:32.804 adrfam: ipv4 00:12:32.804 subtype: nvme subsystem 00:12:32.804 treq: not required 00:12:32.804 portid: 0 00:12:32.804 trsvcid: 4420 00:12:32.804 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:32.804 traddr: 10.0.0.2 00:12:32.804 eflags: none 00:12:32.804 sectype: none 00:12:32.804 =====Discovery Log Entry 2====== 00:12:32.804 trtype: tcp 00:12:32.804 adrfam: ipv4 00:12:32.804 subtype: nvme subsystem 00:12:32.804 treq: not required 00:12:32.804 portid: 0 00:12:32.804 trsvcid: 4420 00:12:32.804 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:32.804 traddr: 10.0.0.2 00:12:32.804 eflags: none 00:12:32.804 sectype: none 00:12:32.804 =====Discovery Log Entry 3====== 00:12:32.804 trtype: tcp 00:12:32.804 adrfam: ipv4 00:12:32.804 subtype: nvme subsystem 00:12:32.804 treq: not required 00:12:32.804 portid: 
0 00:12:32.804 trsvcid: 4420 00:12:32.804 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:32.804 traddr: 10.0.0.2 00:12:32.804 eflags: none 00:12:32.804 sectype: none 00:12:32.804 =====Discovery Log Entry 4====== 00:12:32.804 trtype: tcp 00:12:32.804 adrfam: ipv4 00:12:32.804 subtype: nvme subsystem 00:12:32.804 treq: not required 00:12:32.804 portid: 0 00:12:32.804 trsvcid: 4420 00:12:32.804 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:32.804 traddr: 10.0.0.2 00:12:32.804 eflags: none 00:12:32.804 sectype: none 00:12:32.804 =====Discovery Log Entry 5====== 00:12:32.804 trtype: tcp 00:12:32.804 adrfam: ipv4 00:12:32.804 subtype: discovery subsystem referral 00:12:32.804 treq: not required 00:12:32.804 portid: 0 00:12:32.804 trsvcid: 4430 00:12:32.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:32.804 traddr: 10.0.0.2 00:12:32.804 eflags: none 00:12:32.804 sectype: none 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:32.804 Perform nvmf subsystem discovery via RPC 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.804 [ 00:12:32.804 { 00:12:32.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:32.804 "subtype": "Discovery", 00:12:32.804 "listen_addresses": [ 00:12:32.804 { 00:12:32.804 "trtype": "TCP", 00:12:32.804 "adrfam": "IPv4", 00:12:32.804 "traddr": "10.0.0.2", 00:12:32.804 "trsvcid": "4420" 00:12:32.804 } 00:12:32.804 ], 00:12:32.804 "allow_any_host": true, 00:12:32.804 "hosts": [] 00:12:32.804 }, 00:12:32.804 { 00:12:32.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.804 "subtype": "NVMe", 00:12:32.804 "listen_addresses": [ 
00:12:32.804 { 00:12:32.804 "trtype": "TCP", 00:12:32.804 "adrfam": "IPv4", 00:12:32.804 "traddr": "10.0.0.2", 00:12:32.804 "trsvcid": "4420" 00:12:32.804 } 00:12:32.804 ], 00:12:32.804 "allow_any_host": true, 00:12:32.804 "hosts": [], 00:12:32.804 "serial_number": "SPDK00000000000001", 00:12:32.804 "model_number": "SPDK bdev Controller", 00:12:32.804 "max_namespaces": 32, 00:12:32.804 "min_cntlid": 1, 00:12:32.804 "max_cntlid": 65519, 00:12:32.804 "namespaces": [ 00:12:32.804 { 00:12:32.804 "nsid": 1, 00:12:32.804 "bdev_name": "Null1", 00:12:32.804 "name": "Null1", 00:12:32.804 "nguid": "3F133EB8EDEB4406B39DAF116A385510", 00:12:32.804 "uuid": "3f133eb8-edeb-4406-b39d-af116a385510" 00:12:32.804 } 00:12:32.804 ] 00:12:32.804 }, 00:12:32.804 { 00:12:32.804 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:32.804 "subtype": "NVMe", 00:12:32.804 "listen_addresses": [ 00:12:32.804 { 00:12:32.804 "trtype": "TCP", 00:12:32.804 "adrfam": "IPv4", 00:12:32.804 "traddr": "10.0.0.2", 00:12:32.804 "trsvcid": "4420" 00:12:32.804 } 00:12:32.804 ], 00:12:32.804 "allow_any_host": true, 00:12:32.804 "hosts": [], 00:12:32.804 "serial_number": "SPDK00000000000002", 00:12:32.804 "model_number": "SPDK bdev Controller", 00:12:32.804 "max_namespaces": 32, 00:12:32.804 "min_cntlid": 1, 00:12:32.804 "max_cntlid": 65519, 00:12:32.804 "namespaces": [ 00:12:32.804 { 00:12:32.804 "nsid": 1, 00:12:32.804 "bdev_name": "Null2", 00:12:32.804 "name": "Null2", 00:12:32.804 "nguid": "622A036CDB594A8D9D361007482310C9", 00:12:32.804 "uuid": "622a036c-db59-4a8d-9d36-1007482310c9" 00:12:32.804 } 00:12:32.804 ] 00:12:32.804 }, 00:12:32.804 { 00:12:32.804 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:32.804 "subtype": "NVMe", 00:12:32.804 "listen_addresses": [ 00:12:32.804 { 00:12:32.804 "trtype": "TCP", 00:12:32.804 "adrfam": "IPv4", 00:12:32.804 "traddr": "10.0.0.2", 00:12:32.804 "trsvcid": "4420" 00:12:32.804 } 00:12:32.804 ], 00:12:32.804 "allow_any_host": true, 00:12:32.804 "hosts": [], 00:12:32.804 
"serial_number": "SPDK00000000000003", 00:12:32.804 "model_number": "SPDK bdev Controller", 00:12:32.804 "max_namespaces": 32, 00:12:32.804 "min_cntlid": 1, 00:12:32.804 "max_cntlid": 65519, 00:12:32.804 "namespaces": [ 00:12:32.804 { 00:12:32.804 "nsid": 1, 00:12:32.804 "bdev_name": "Null3", 00:12:32.804 "name": "Null3", 00:12:32.804 "nguid": "FA59DB2447F84A69865942723CBCCF12", 00:12:32.804 "uuid": "fa59db24-47f8-4a69-8659-42723cbccf12" 00:12:32.804 } 00:12:32.804 ] 00:12:32.804 }, 00:12:32.804 { 00:12:32.804 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:32.804 "subtype": "NVMe", 00:12:32.804 "listen_addresses": [ 00:12:32.804 { 00:12:32.804 "trtype": "TCP", 00:12:32.804 "adrfam": "IPv4", 00:12:32.804 "traddr": "10.0.0.2", 00:12:32.804 "trsvcid": "4420" 00:12:32.804 } 00:12:32.804 ], 00:12:32.804 "allow_any_host": true, 00:12:32.804 "hosts": [], 00:12:32.804 "serial_number": "SPDK00000000000004", 00:12:32.804 "model_number": "SPDK bdev Controller", 00:12:32.804 "max_namespaces": 32, 00:12:32.804 "min_cntlid": 1, 00:12:32.804 "max_cntlid": 65519, 00:12:32.804 "namespaces": [ 00:12:32.804 { 00:12:32.804 "nsid": 1, 00:12:32.804 "bdev_name": "Null4", 00:12:32.804 "name": "Null4", 00:12:32.804 "nguid": "13756E1238FA45729D9E34187E0D9E32", 00:12:32.804 "uuid": "13756e12-38fa-4572-9d9e-34187e0d9e32" 00:12:32.804 } 00:12:32.804 ] 00:12:32.804 } 00:12:32.804 ] 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.804 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.805 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:33.066 
19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:33.066 rmmod nvme_tcp 00:12:33.066 rmmod nvme_fabrics 00:12:33.066 rmmod nvme_keyring 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3586231 ']' 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3586231 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3586231 ']' 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3586231 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3586231 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3586231' 00:12:33.066 killing process with pid 3586231 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3586231 00:12:33.066 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3586231 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.326 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:35.240 00:12:35.240 real 0m10.412s 00:12:35.240 user 0m7.805s 00:12:35.240 sys 0m5.272s 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.240 ************************************ 00:12:35.240 END TEST 
nvmf_target_discovery 00:12:35.240 ************************************ 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.240 ************************************ 00:12:35.240 START TEST nvmf_referrals 00:12:35.240 ************************************ 00:12:35.240 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.501 * Looking for test storage... 00:12:35.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.501 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.501 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:35.501 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.501 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.501 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.501 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.502 19:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.502 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:43.643 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:43.643 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:43.643 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:4b:00.1: cvl_0_1' 00:12:43.643 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:43.643 19:52:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.643 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:43.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:12:43.643 00:12:43.643 --- 10.0.0.2 ping statistics --- 00:12:43.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.644 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:43.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:12:43.644 00:12:43.644 --- 10.0.0.1 ping statistics --- 00:12:43.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.644 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3590684 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3590684 00:12:43.644 
19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3590684 ']' 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:43.644 19:52:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 [2024-07-24 19:52:30.535064] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:12:43.644 [2024-07-24 19:52:30.535121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.644 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.644 [2024-07-24 19:52:30.605761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.644 [2024-07-24 19:52:30.678396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.644 [2024-07-24 19:52:30.678436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:43.644 [2024-07-24 19:52:30.678444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.644 [2024-07-24 19:52:30.678450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.644 [2024-07-24 19:52:30.678456] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.644 [2024-07-24 19:52:30.678600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.644 [2024-07-24 19:52:30.678723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.644 [2024-07-24 19:52:30.678883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.644 [2024-07-24 19:52:30.678884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 [2024-07-24 19:52:31.360178] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 [2024-07-24 19:52:31.376404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:43.644 19:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:43.644 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.905 19:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.905 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:43.906 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:44.166 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:44.167 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.167 19:52:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.167 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.167 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.167 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.427 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:44.688 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:44.948 19:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:44.948 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.208 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.209 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:45.209 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:45.209 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:45.209 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:45.209 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:45.209 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:45.209 19:52:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.209 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.209 rmmod nvme_tcp 00:12:45.209 rmmod nvme_fabrics 00:12:45.470 rmmod nvme_keyring 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3590684 ']' 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3590684 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3590684 ']' 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3590684 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3590684 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3590684' 00:12:45.470 killing process with pid 3590684 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 3590684 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3590684 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.470 19:52:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:48.017 00:12:48.017 real 0m12.289s 00:12:48.017 user 0m13.708s 00:12:48.017 sys 0m6.046s 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.017 ************************************ 00:12:48.017 END TEST nvmf_referrals 00:12:48.017 ************************************ 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.017 ************************************ 00:12:48.017 START TEST nvmf_connect_disconnect 00:12:48.017 ************************************ 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:48.017 * Looking for test storage... 00:12:48.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.017 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.018 19:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:48.018 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.159 19:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.159 19:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:56.159 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:56.159 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:56.159 19:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:56.159 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.159 
19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:56.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:56.159 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:56.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:56.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.768 ms 00:12:56.160 00:12:56.160 --- 10.0.0.2 ping statistics --- 00:12:56.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.160 rtt min/avg/max/mdev = 0.768/0.768/0.768/0.000 ms 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:12:56.160 00:12:56.160 --- 10.0.0.1 ping statistics --- 00:12:56.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.160 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3595436 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3595436 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3595436 ']' 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.160 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 [2024-07-24 19:52:43.007142] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:12:56.160 [2024-07-24 19:52:43.007229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.160 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.160 [2024-07-24 19:52:43.078193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.160 [2024-07-24 19:52:43.152099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.160 [2024-07-24 19:52:43.152140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.160 [2024-07-24 19:52:43.152147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.160 [2024-07-24 19:52:43.152158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.160 [2024-07-24 19:52:43.152164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:56.160 [2024-07-24 19:52:43.152303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.160 [2024-07-24 19:52:43.152417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.160 [2024-07-24 19:52:43.152571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.160 [2024-07-24 19:52:43.152573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 [2024-07-24 19:52:43.831204] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.160 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.161 19:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.161 [2024-07-24 19:52:43.890563] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.161 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.161 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:56.161 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:56.161 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:00.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.532 rmmod nvme_tcp 00:13:14.532 rmmod nvme_fabrics 00:13:14.532 rmmod nvme_keyring 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3595436 ']' 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3595436 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3595436 ']' 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3595436 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3595436 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3595436' 00:13:14.532 killing process with pid 3595436 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3595436 00:13:14.532 19:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3595436 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.532 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.079 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.079 00:13:17.079 real 0m28.955s 00:13:17.079 user 1m18.841s 00:13:17.079 sys 0m6.631s 00:13:17.079 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.079 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:17.079 ************************************ 00:13:17.079 END TEST nvmf_connect_disconnect 00:13:17.079 ************************************ 00:13:17.079 19:53:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:17.079 19:53:04 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:17.079 19:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.080 ************************************ 00:13:17.080 START TEST nvmf_multitarget 00:13:17.080 ************************************ 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:17.080 * Looking for test storage... 00:13:17.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.080 19:53:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:17.080 
19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.080 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.673 19:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.673 19:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.673 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:23.674 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:23.674 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.674 19:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:23.674 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:23.674 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:23.674 19:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.674 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.935 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.935 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.935 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.773 ms 00:13:23.935 00:13:23.935 --- 10.0.0.2 ping statistics --- 00:13:23.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.936 rtt min/avg/max/mdev = 0.773/0.773/0.773/0.000 ms 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:23.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:13:23.936 00:13:23.936 --- 10.0.0.1 ping statistics --- 00:13:23.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.936 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3603541 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # 
waitforlisten 3603541 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3603541 ']' 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.936 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.936 [2024-07-24 19:53:11.824542] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:13:23.936 [2024-07-24 19:53:11.824607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.936 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.196 [2024-07-24 19:53:11.895353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.196 [2024-07-24 19:53:11.970385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.196 [2024-07-24 19:53:11.970424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:24.196 [2024-07-24 19:53:11.970432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.196 [2024-07-24 19:53:11.970439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.196 [2024-07-24 19:53:11.970444] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.196 [2024-07-24 19:53:11.970593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.196 [2024-07-24 19:53:11.970713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.196 [2024-07-24 19:53:11.970873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.196 [2024-07-24 19:53:11.970874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:24.767 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:24.767 19:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:25.027 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:25.027 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:25.027 "nvmf_tgt_1" 00:13:25.027 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:25.027 "nvmf_tgt_2" 00:13:25.027 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:25.027 19:53:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:25.289 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:25.289 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:25.289 true 00:13:25.289 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:25.289 true 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.549 rmmod nvme_tcp 00:13:25.549 rmmod nvme_fabrics 00:13:25.549 rmmod nvme_keyring 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3603541 ']' 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3603541 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3603541 ']' 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3603541 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3603541 00:13:25.549 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.550 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.550 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3603541' 00:13:25.550 killing process with pid 3603541 00:13:25.550 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3603541 00:13:25.550 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3603541 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.810 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.726 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.987 00:13:27.987 real 
0m11.105s 00:13:27.987 user 0m9.149s 00:13:27.987 sys 0m5.793s 00:13:27.987 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.987 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:27.987 ************************************ 00:13:27.987 END TEST nvmf_multitarget 00:13:27.987 ************************************ 00:13:27.987 19:53:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:27.987 19:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:27.987 19:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.987 19:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.988 ************************************ 00:13:27.988 START TEST nvmf_rpc 00:13:27.988 ************************************ 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:27.988 * Looking for test storage... 
00:13:27.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.988 
19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.988 19:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.988 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.171 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:36.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:36.172 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:36.172 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:36.172 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.172 19:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:36.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:36.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:13:36.172 00:13:36.172 --- 10.0.0.2 ping statistics --- 00:13:36.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.172 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:13:36.172 00:13:36.172 --- 10.0.0.1 ping statistics --- 00:13:36.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.172 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:13:36.172 19:53:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3607907 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3607907 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3607907 ']' 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.172 [2024-07-24 19:53:23.054644] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
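The interface wiring a few entries back (nvmf_tcp_init: target port cvl_0_0 moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, initiator port cvl_0_1 left in the root namespace with 10.0.0.1, port 4420 opened, then a cross-ping) can be sketched as below. This is a dry run that only prints the commands; replacing `run()` with direct root execution would apply them.

```shell
# Dry-run sketch of the namespace wiring common.sh performs above.
run() { echo "+ $*"; }   # swap for direct execution (as root) to apply
NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

Isolating the target port in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over real PHY ports.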
00:13:36.172 [2024-07-24 19:53:23.054708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.172 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.172 [2024-07-24 19:53:23.124960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.172 [2024-07-24 19:53:23.199516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.172 [2024-07-24 19:53:23.199555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.172 [2024-07-24 19:53:23.199562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.172 [2024-07-24 19:53:23.199569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.172 [2024-07-24 19:53:23.199574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
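The `-m 0xF` mask passed to nvmf_tgt above is why the log reports four reactors, on cores 0-3. A minimal sketch of expanding such a hex core mask into a core list:

```shell
# Sketch: expand an SPDK/DPDK-style hex core mask (here -m 0xF) into the
# list of cores the reactors run on.
mask=$((0xF))
cores=""
bit=0
while [ $bit -lt 32 ]; do
    if [ $(( (mask >> bit) & 1 )) -eq 1 ]; then
        cores="$cores $bit"
    fi
    bit=$((bit + 1))
done
echo "reactor cores:$cores"   # -> reactor cores: 0 1 2 3
```

A mask of 0xF sets bits 0-3, matching the four "Reactor started on core N" notices above.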
00:13:36.172 [2024-07-24 19:53:23.199712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.172 [2024-07-24 19:53:23.199831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.172 [2024-07-24 19:53:23.199991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.172 [2024-07-24 19:53:23.199992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.172 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:36.173 "tick_rate": 2400000000, 00:13:36.173 "poll_groups": [ 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_000", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 
00:13:36.173 "transports": [] 00:13:36.173 }, 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_001", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 00:13:36.173 "transports": [] 00:13:36.173 }, 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_002", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 00:13:36.173 "transports": [] 00:13:36.173 }, 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_003", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 00:13:36.173 "transports": [] 00:13:36.173 } 00:13:36.173 ] 00:13:36.173 }' 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:36.173 19:53:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.173 [2024-07-24 19:53:24.004545] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:36.173 "tick_rate": 2400000000, 00:13:36.173 "poll_groups": [ 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_000", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 00:13:36.173 "transports": [ 00:13:36.173 { 00:13:36.173 "trtype": "TCP" 00:13:36.173 } 00:13:36.173 ] 00:13:36.173 }, 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_001", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 00:13:36.173 "transports": [ 00:13:36.173 { 00:13:36.173 "trtype": "TCP" 00:13:36.173 } 00:13:36.173 ] 00:13:36.173 }, 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_002", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 00:13:36.173 
"transports": [ 00:13:36.173 { 00:13:36.173 "trtype": "TCP" 00:13:36.173 } 00:13:36.173 ] 00:13:36.173 }, 00:13:36.173 { 00:13:36.173 "name": "nvmf_tgt_poll_group_003", 00:13:36.173 "admin_qpairs": 0, 00:13:36.173 "io_qpairs": 0, 00:13:36.173 "current_admin_qpairs": 0, 00:13:36.173 "current_io_qpairs": 0, 00:13:36.173 "pending_bdev_io": 0, 00:13:36.173 "completed_nvme_io": 0, 00:13:36.173 "transports": [ 00:13:36.173 { 00:13:36.173 "trtype": "TCP" 00:13:36.173 } 00:13:36.173 ] 00:13:36.173 } 00:13:36.173 ] 00:13:36.173 }' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:36.173 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:36.434 19:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 Malloc1 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 [2024-07-24 19:53:24.193780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:36.434 [2024-07-24 19:53:24.220790] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:36.434 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:36.434 could not add new controller: failed to write to nvme-fabrics device 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:36.434 19:53:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.347 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.347 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:38.347 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.347 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:38.347 19:53:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:40.258 19:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:40.258 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.258 [2024-07-24 19:53:27.997610] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:40.258 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:40.258 could not add new controller: failed to write to nvme-fabrics device 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.258 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.175 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.175 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.175 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.175 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:42.175 19:53:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:44.091 19:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.091 19:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.091 [2024-07-24 19:53:31.774924] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.091 19:53:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.478 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.478 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.478 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.478 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:45.478 19:53:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.023 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 [2024-07-24 19:53:35.526763] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.024 19:53:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.410 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.410 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:13:49.410 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.410 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:49.410 19:53:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.325 [2024-07-24 19:53:39.206847] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.325 19:53:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:53.240 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.240 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:53.240 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.240 19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:53.240 
19:53:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.155 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.156 [2024-07-24 19:53:42.934756] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.156 19:53:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.113 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:57.113 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:57.113 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.113 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:57.113 19:53:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.028 [2024-07-24 19:53:46.685926] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.028 19:53:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.414 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:00.414 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:00.414 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.414 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:00.414 19:53:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.330 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.330 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:02.330 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.591 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.592 19:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 [2024-07-24 19:53:50.447711] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 
19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 
19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 [2024-07-24 19:53:50.507863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.592 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 [2024-07-24 19:53:50.572044] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 [2024-07-24 19:53:50.632251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 [2024-07-24 19:53:50.696454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.854 19:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:02.854 "tick_rate": 2400000000, 00:14:02.854 "poll_groups": [ 00:14:02.854 { 00:14:02.854 "name": "nvmf_tgt_poll_group_000", 00:14:02.854 "admin_qpairs": 0, 00:14:02.854 "io_qpairs": 224, 00:14:02.854 "current_admin_qpairs": 0, 00:14:02.854 "current_io_qpairs": 0, 00:14:02.854 "pending_bdev_io": 0, 00:14:02.854 "completed_nvme_io": 306, 00:14:02.854 "transports": [ 00:14:02.854 { 00:14:02.854 "trtype": "TCP" 00:14:02.854 } 00:14:02.854 ] 00:14:02.854 }, 00:14:02.854 { 00:14:02.854 "name": "nvmf_tgt_poll_group_001", 00:14:02.854 "admin_qpairs": 1, 00:14:02.854 "io_qpairs": 223, 00:14:02.854 "current_admin_qpairs": 0, 00:14:02.854 "current_io_qpairs": 0, 00:14:02.854 "pending_bdev_io": 0, 00:14:02.854 "completed_nvme_io": 224, 00:14:02.854 "transports": [ 00:14:02.854 { 00:14:02.854 "trtype": "TCP" 00:14:02.854 } 00:14:02.854 ] 00:14:02.854 }, 00:14:02.854 { 00:14:02.854 "name": "nvmf_tgt_poll_group_002", 00:14:02.854 "admin_qpairs": 6, 00:14:02.854 "io_qpairs": 218, 00:14:02.854 "current_admin_qpairs": 0, 00:14:02.854 "current_io_qpairs": 0, 00:14:02.854 "pending_bdev_io": 0, 00:14:02.854 "completed_nvme_io": 420, 00:14:02.854 "transports": [ 00:14:02.854 { 00:14:02.854 "trtype": "TCP" 00:14:02.854 } 00:14:02.854 ] 00:14:02.854 }, 00:14:02.854 { 00:14:02.854 "name": "nvmf_tgt_poll_group_003", 00:14:02.854 "admin_qpairs": 0, 00:14:02.854 "io_qpairs": 224, 00:14:02.854 "current_admin_qpairs": 0, 00:14:02.854 "current_io_qpairs": 0, 00:14:02.854 "pending_bdev_io": 0, 
00:14:02.854 "completed_nvme_io": 289, 00:14:02.854 "transports": [ 00:14:02.854 { 00:14:02.854 "trtype": "TCP" 00:14:02.854 } 00:14:02.854 ] 00:14:02.854 } 00:14:02.854 ] 00:14:02.854 }' 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:02.854 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.115 rmmod nvme_tcp 00:14:03.115 rmmod nvme_fabrics 00:14:03.115 rmmod nvme_keyring 00:14:03.115 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3607907 ']' 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3607907 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3607907 ']' 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3607907 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3607907 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3607907' 00:14:03.116 killing process with pid 3607907 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3607907 00:14:03.116 19:53:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 3607907 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.376 19:53:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.289 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.289 00:14:05.289 real 0m37.448s 00:14:05.289 user 1m53.666s 00:14:05.289 sys 0m7.109s 00:14:05.289 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.289 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.289 ************************************ 00:14:05.289 END TEST nvmf_rpc 00:14:05.289 ************************************ 00:14:05.289 19:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:05.289 19:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:05.289 19:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.289 19:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:14:05.550 ************************************ 00:14:05.550 START TEST nvmf_invalid 00:14:05.550 ************************************ 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:05.550 * Looking for test storage... 00:14:05.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:05.550 19:53:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.136 19:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.136 19:53:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.136 
19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:12.136 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.136 19:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:12.136 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.136 
19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:12.136 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:12.136 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.136 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.732 ms 00:14:12.397 00:14:12.397 --- 10.0.0.2 ping statistics --- 00:14:12.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.397 rtt min/avg/max/mdev = 0.732/0.732/0.732/0.000 ms 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:14:12.397 00:14:12.397 --- 10.0.0.1 ping statistics --- 00:14:12.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.397 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.397 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.398 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.398 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.398 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.398 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3617773 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3617773 00:14:12.658 19:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3617773 ']' 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.658 19:54:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:12.659 [2024-07-24 19:54:00.441558] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:14:12.659 [2024-07-24 19:54:00.441608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.659 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.659 [2024-07-24 19:54:00.508942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.659 [2024-07-24 19:54:00.574177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.659 [2024-07-24 19:54:00.574220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:12.659 [2024-07-24 19:54:00.574227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.659 [2024-07-24 19:54:00.574234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.659 [2024-07-24 19:54:00.574239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.659 [2024-07-24 19:54:00.574335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.659 [2024-07-24 19:54:00.574600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.659 [2024-07-24 19:54:00.574758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.659 [2024-07-24 19:54:00.574758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14662 00:14:13.599 [2024-07-24 19:54:01.400607] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:13.599 { 00:14:13.599 "nqn": "nqn.2016-06.io.spdk:cnode14662", 00:14:13.599 "tgt_name": "foobar", 00:14:13.599 "method": "nvmf_create_subsystem", 00:14:13.599 "req_id": 1 00:14:13.599 } 00:14:13.599 Got JSON-RPC error response 00:14:13.599 response: 00:14:13.599 { 00:14:13.599 "code": -32603, 00:14:13.599 "message": "Unable to find target foobar" 00:14:13.599 }' 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:13.599 { 00:14:13.599 "nqn": "nqn.2016-06.io.spdk:cnode14662", 00:14:13.599 "tgt_name": "foobar", 00:14:13.599 "method": "nvmf_create_subsystem", 00:14:13.599 "req_id": 1 00:14:13.599 } 00:14:13.599 Got JSON-RPC error response 00:14:13.599 response: 00:14:13.599 { 00:14:13.599 "code": -32603, 00:14:13.599 "message": "Unable to find target foobar" 00:14:13.599 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:13.599 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3165 00:14:13.860 [2024-07-24 19:54:01.581277] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3165: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:13.860 { 00:14:13.860 "nqn": "nqn.2016-06.io.spdk:cnode3165", 00:14:13.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:13.860 "method": "nvmf_create_subsystem", 00:14:13.860 "req_id": 1 00:14:13.860 } 00:14:13.860 Got JSON-RPC error response 00:14:13.860 response: 
00:14:13.860 { 00:14:13.860 "code": -32602, 00:14:13.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:13.860 }' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:13.860 { 00:14:13.860 "nqn": "nqn.2016-06.io.spdk:cnode3165", 00:14:13.860 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:13.860 "method": "nvmf_create_subsystem", 00:14:13.860 "req_id": 1 00:14:13.860 } 00:14:13.860 Got JSON-RPC error response 00:14:13.860 response: 00:14:13.860 { 00:14:13.860 "code": -32602, 00:14:13.860 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:13.860 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17916 00:14:13.860 [2024-07-24 19:54:01.757776] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17916: invalid model number 'SPDK_Controller' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:13.860 { 00:14:13.860 "nqn": "nqn.2016-06.io.spdk:cnode17916", 00:14:13.860 "model_number": "SPDK_Controller\u001f", 00:14:13.860 "method": "nvmf_create_subsystem", 00:14:13.860 "req_id": 1 00:14:13.860 } 00:14:13.860 Got JSON-RPC error response 00:14:13.860 response: 00:14:13.860 { 00:14:13.860 "code": -32602, 00:14:13.860 "message": "Invalid MN SPDK_Controller\u001f" 00:14:13.860 }' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:13.860 { 00:14:13.860 "nqn": "nqn.2016-06.io.spdk:cnode17916", 00:14:13.860 "model_number": "SPDK_Controller\u001f", 00:14:13.860 "method": "nvmf_create_subsystem", 00:14:13.860 "req_id": 1 00:14:13.860 } 
00:14:13.860 Got JSON-RPC error response 00:14:13.860 response: 00:14:13.860 { 00:14:13.860 "code": -32602, 00:14:13.860 "message": "Invalid MN SPDK_Controller\u001f" 00:14:13.860 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:13.860 19:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:13.860 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:14.122 19:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:14.122 19:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 
00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]] 00:14:14.122 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'R\1(=R5sOXu*N.q$O7+4' 00:14:14.123 19:54:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'R\1(=R5sOXu*N.q$O7+4' nqn.2016-06.io.spdk:cnode7134 00:14:14.384 [2024-07-24 19:54:02.090867] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7134: invalid serial number 'R\1(=R5sOXu*N.q$O7+4' 
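The per-character trace above comes from the gen_random_s loop in target/invalid.sh: it walks a chars array, converts each numeric code with printf %x, decodes it with echo -e, and appends it to a string. A minimal self-contained sketch of that technique (a hypothetical reimplementation for illustration; only the printf %x / echo -e two-step is taken from the trace, the helper body and the random code selection are assumptions):

```shell
# Hypothetical sketch of the gen_random_s loop traced above: build a
# string one character at a time from numeric codes, using the same
# printf %x -> echo -e conversion the xtrace shows at invalid.sh@25.
gen_random_s() {
  local length=$1 ll code string=''
  for ((ll = 0; ll < length; ll++)); do
    # pick a code in 32..127, mirroring the chars array in the trace
    code=$((RANDOM % 96 + 32))
    # printf %x yields the hex form; echo -e decodes \xHH to a character
    string+=$(echo -e "\\x$(printf %x "$code")")
  done
  # printf instead of echo, so a string starting with "-e" prints intact
  printf '%s\n' "$string"
}

gen_random_s 20
```

The generated string is then handed to rpc.py nvmf_create_subsystem as a serial number, which the target is expected to reject.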
00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:14.384 { 00:14:14.384 "nqn": "nqn.2016-06.io.spdk:cnode7134", 00:14:14.384 "serial_number": "R\\1(=R5sOXu\u007f*N.q$O7+4", 00:14:14.384 "method": "nvmf_create_subsystem", 00:14:14.384 "req_id": 1 00:14:14.384 } 00:14:14.384 Got JSON-RPC error response 00:14:14.384 response: 00:14:14.384 { 00:14:14.384 "code": -32602, 00:14:14.384 "message": "Invalid SN R\\1(=R5sOXu\u007f*N.q$O7+4" 00:14:14.384 }' 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:14.384 { 00:14:14.384 "nqn": "nqn.2016-06.io.spdk:cnode7134", 00:14:14.384 "serial_number": "R\\1(=R5sOXu\u007f*N.q$O7+4", 00:14:14.384 "method": "nvmf_create_subsystem", 00:14:14.384 "req_id": 1 00:14:14.384 } 00:14:14.384 Got JSON-RPC error response 00:14:14.384 response: 00:14:14.384 { 00:14:14.384 "code": -32602, 00:14:14.384 "message": "Invalid SN R\\1(=R5sOXu\u007f*N.q$O7+4" 00:14:14.384 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:14.384 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.384 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:14.385 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 
00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:14.385 
19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:14.385 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.386 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.386 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:14.648 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:14.648 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:14.648 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:14.649 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:14.649 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:14.649 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:14.649 19:54:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:14:14.649 19:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '~4}=NGM5"8tJF.?``V|xmWVs0'\''J4#v% /dev/null' 00:14:16.474 19:54:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.022 00:14:19.022 real 0m13.155s 00:14:19.022 user 0m19.196s 00:14:19.022 sys 0m6.143s 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:19.022 ************************************ 00:14:19.022 END TEST nvmf_invalid 00:14:19.022 ************************************ 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.022 ************************************ 00:14:19.022 START TEST nvmf_connect_stress 00:14:19.022 ************************************ 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:19.022 * Looking for test storage... 
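The check at invalid.sh@55 earlier in the trace compares the RPC output against the pattern rendered as *\I\n\v\a\l\i\d\ \S\N* — that is simply the glob *"Invalid SN"*, with each literal character backslash-quoted by xtrace. A self-contained sketch of the same match (the sample output string below is fabricated for illustration; in the test it comes from rpc.py nvmf_create_subsystem):

```shell
# Stand-in for the JSON-RPC error the target returns for a bad serial
# number (illustrative only; see the real response earlier in the log).
out='Got JSON-RPC error response { "code": -32602, "message": "Invalid SN" }'

# xtrace prints this pattern as *\I\n\v\a\l\i\d\ \S\N* because it
# backslash-quotes every literal character of the unquoted glob.
if [[ $out == *"Invalid SN"* ]]; then
  echo "subsystem creation rejected as expected"
fi
```

The same pattern-match idiom is what lets the test pass or fail on the substring of the error message rather than on the full JSON body.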
00:14:19.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.022 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.023 19:54:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:25.671 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.671 19:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:25.671 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.671 19:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:25.671 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:25.671 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.671 
19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.671 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.932 
19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.932 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.932 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.932 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.932 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.932 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:14:25.933 00:14:25.933 --- 10.0.0.2 ping statistics --- 00:14:25.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.933 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:25.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:14:25.933 00:14:25.933 --- 10.0.0.1 ping statistics --- 00:14:25.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.933 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.933 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3623284 00:14:26.194 19:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3623284 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3623284 ']' 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.194 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.195 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.195 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.195 19:54:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.195 [2024-07-24 19:54:13.989062] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:14:26.195 [2024-07-24 19:54:13.989129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.195 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.195 [2024-07-24 19:54:14.080441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:26.455 [2024-07-24 19:54:14.171032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:26.455 [2024-07-24 19:54:14.171093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.455 [2024-07-24 19:54:14.171101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.455 [2024-07-24 19:54:14.171109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.455 [2024-07-24 19:54:14.171115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.455 [2024-07-24 19:54:14.171192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.455 [2024-07-24 19:54:14.171370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.455 [2024-07-24 19:54:14.171509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.026 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.026 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:27.026 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.026 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.027 [2024-07-24 19:54:14.806746] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.027 [2024-07-24 19:54:14.852132] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.027 NULL1 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3623514 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.027 19:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.027 19:54:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.598 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.598 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:27.598 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.598 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.598 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.859 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.859 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:27.859 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.859 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.859 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.119 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.119 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:28.119 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.119 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.119 19:54:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.380 
19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.380 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:28.380 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.380 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.380 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.641 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.902 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:28.902 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.902 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.902 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.163 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.163 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:29.163 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.163 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.163 19:54:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.423 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.423 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 
00:14:29.423 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.423 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.423 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.684 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.685 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:29.685 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.685 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.685 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.946 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.947 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:29.947 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.947 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.947 19:54:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.518 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.518 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:30.518 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.518 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:30.518 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.779 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.779 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:30.779 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.779 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.779 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.040 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.040 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:31.040 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.040 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.040 19:54:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.300 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.300 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:31.300 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.300 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.300 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:31.872 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:31.872 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.872 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.872 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.133 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.133 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:32.133 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.133 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.133 19:54:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.394 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.394 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:32.394 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.394 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.394 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.655 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.655 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:32.655 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.655 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.655 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.916 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.916 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:32.916 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.916 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.916 19:54:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.488 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.488 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:33.488 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.488 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.488 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.749 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.749 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:33.749 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.749 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.749 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:14:34.010 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.010 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:34.010 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.010 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.010 19:54:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.271 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.271 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:34.271 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.271 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.271 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.532 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.532 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:34.532 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.532 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.532 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.105 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.105 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:35.105 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.105 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.105 19:54:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.366 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.366 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:35.366 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.366 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.366 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.626 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.626 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:35.626 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.626 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.626 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.886 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.886 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:35.886 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.886 19:54:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.886 19:54:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.147 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.147 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:36.147 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.147 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.147 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.720 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.720 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:36.720 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.720 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.720 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.981 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.981 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:36.981 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.981 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.981 19:54:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.242 
Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3623514 00:14:37.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3623514) - No such process 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3623514 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.242 rmmod nvme_tcp 00:14:37.242 rmmod nvme_fabrics 00:14:37.242 rmmod nvme_keyring 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 
00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3623284 ']' 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3623284 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3623284 ']' 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3623284 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3623284 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3623284' 00:14:37.242 killing process with pid 3623284 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3623284 00:14:37.242 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3623284 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.503 19:54:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.417 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:39.417 00:14:39.417 real 0m20.871s 00:14:39.417 user 0m42.081s 00:14:39.417 sys 0m8.689s 00:14:39.417 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:39.417 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 ************************************ 00:14:39.417 END TEST nvmf_connect_stress 00:14:39.417 ************************************ 00:14:39.678 19:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:39.678 19:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:39.678 19:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:39.678 19:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:39.678 ************************************ 00:14:39.678 START TEST nvmf_fused_ordering 00:14:39.678 ************************************ 00:14:39.678 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:39.678 * Looking for test storage... 00:14:39.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.678 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.678 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.679 19:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.867 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:47.868 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.868 19:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:47.868 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.868 19:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:47.868 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:47.868 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.868 
19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.868 
19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:14:47.868 00:14:47.868 --- 10.0.0.2 ping statistics --- 00:14:47.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.868 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:14:47.868 00:14:47.868 --- 10.0.0.1 ping statistics --- 00:14:47.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.868 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3629853 00:14:47.868 19:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3629853 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:47.868 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3629853 ']' 00:14:47.869 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.869 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.869 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.869 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.869 19:54:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 [2024-07-24 19:54:34.918884] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:14:47.869 [2024-07-24 19:54:34.918952] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.869 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.869 [2024-07-24 19:54:35.005707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.869 [2024-07-24 19:54:35.097462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:47.869 [2024-07-24 19:54:35.097521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.869 [2024-07-24 19:54:35.097529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.869 [2024-07-24 19:54:35.097536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.869 [2024-07-24 19:54:35.097542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.869 [2024-07-24 19:54:35.097579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 [2024-07-24 19:54:35.753299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 [2024-07-24 19:54:35.777571] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 NULL1 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:47.869 19:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.869 19:54:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:48.130 [2024-07-24 19:54:35.848459] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:14:48.130 [2024-07-24 19:54:35.848507] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629884 ] 00:14:48.130 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.700 Attached to nqn.2016-06.io.spdk:cnode1 00:14:48.700 Namespace ID: 1 size: 1GB 00:14:48.700 fused_ordering(0) 00:14:48.700 fused_ordering(1) 00:14:48.700 fused_ordering(2) 00:14:48.700 fused_ordering(3) 00:14:48.700 fused_ordering(4) 00:14:48.700 fused_ordering(5) 00:14:48.700 fused_ordering(6) 00:14:48.700 fused_ordering(7) 00:14:48.700 fused_ordering(8) 00:14:48.700 fused_ordering(9) 00:14:48.700 fused_ordering(10) 00:14:48.700 fused_ordering(11) 00:14:48.700 fused_ordering(12) 00:14:48.700 fused_ordering(13) 00:14:48.700 fused_ordering(14) 00:14:48.700 fused_ordering(15) 00:14:48.700 fused_ordering(16) 00:14:48.700 fused_ordering(17) 00:14:48.700 fused_ordering(18) 00:14:48.700 fused_ordering(19) 00:14:48.700 fused_ordering(20) 00:14:48.700 fused_ordering(21) 00:14:48.700 fused_ordering(22) 00:14:48.700 fused_ordering(23) 00:14:48.700 fused_ordering(24) 00:14:48.700 fused_ordering(25) 00:14:48.700 fused_ordering(26) 00:14:48.700 fused_ordering(27) 00:14:48.700 fused_ordering(28) 00:14:48.700 fused_ordering(29) 00:14:48.700 fused_ordering(30) 00:14:48.700 fused_ordering(31) 00:14:48.700 fused_ordering(32) 00:14:48.700 fused_ordering(33) 00:14:48.700 fused_ordering(34) 00:14:48.700 fused_ordering(35) 00:14:48.700 fused_ordering(36) 00:14:48.700 fused_ordering(37) 00:14:48.700 fused_ordering(38) 00:14:48.700 fused_ordering(39) 00:14:48.700 fused_ordering(40) 00:14:48.700 fused_ordering(41) 00:14:48.700 fused_ordering(42) 00:14:48.700 fused_ordering(43) 00:14:48.700 fused_ordering(44) 00:14:48.700 fused_ordering(45) 00:14:48.700 fused_ordering(46) 00:14:48.700 fused_ordering(47) 00:14:48.700 
fused_ordering(48) 00:14:48.700 fused_ordering(49) 00:14:48.700 fused_ordering(50) 00:14:48.700 fused_ordering(51) 00:14:48.700 fused_ordering(52) 00:14:48.700 fused_ordering(53) 00:14:48.700 fused_ordering(54) 00:14:48.700 fused_ordering(55) 00:14:48.700 fused_ordering(56) 00:14:48.700 fused_ordering(57) 00:14:48.700 fused_ordering(58) 00:14:48.700 fused_ordering(59) 00:14:48.700 fused_ordering(60) 00:14:48.700 fused_ordering(61) 00:14:48.700 fused_ordering(62) 00:14:48.700 fused_ordering(63) 00:14:48.700 fused_ordering(64) 00:14:48.700 fused_ordering(65) 00:14:48.700 fused_ordering(66) 00:14:48.700 fused_ordering(67) 00:14:48.700 fused_ordering(68) 00:14:48.700 fused_ordering(69) 00:14:48.700 fused_ordering(70) 00:14:48.700 fused_ordering(71) 00:14:48.700 fused_ordering(72) 00:14:48.700 fused_ordering(73) 00:14:48.700 fused_ordering(74) 00:14:48.700 fused_ordering(75) 00:14:48.700 fused_ordering(76) 00:14:48.700 fused_ordering(77) 00:14:48.700 fused_ordering(78) 00:14:48.700 fused_ordering(79) 00:14:48.700 fused_ordering(80) 00:14:48.700 fused_ordering(81) 00:14:48.700 fused_ordering(82) 00:14:48.700 fused_ordering(83) 00:14:48.700 fused_ordering(84) 00:14:48.700 fused_ordering(85) 00:14:48.700 fused_ordering(86) 00:14:48.700 fused_ordering(87) 00:14:48.700 fused_ordering(88) 00:14:48.700 fused_ordering(89) 00:14:48.700 fused_ordering(90) 00:14:48.700 fused_ordering(91) 00:14:48.700 fused_ordering(92) 00:14:48.700 fused_ordering(93) 00:14:48.700 fused_ordering(94) 00:14:48.700 fused_ordering(95) 00:14:48.700 fused_ordering(96) 00:14:48.700 fused_ordering(97) 00:14:48.700 fused_ordering(98) 00:14:48.700 fused_ordering(99) 00:14:48.700 fused_ordering(100) 00:14:48.700 fused_ordering(101) 00:14:48.700 fused_ordering(102) 00:14:48.700 fused_ordering(103) 00:14:48.700 fused_ordering(104) 00:14:48.700 fused_ordering(105) 00:14:48.700 fused_ordering(106) 00:14:48.700 fused_ordering(107) 00:14:48.700 fused_ordering(108) 00:14:48.700 fused_ordering(109) 00:14:48.700 
fused_ordering(110) 00:14:48.700 fused_ordering(111) 00:14:48.700 fused_ordering(112) 00:14:48.700 fused_ordering(113) 00:14:48.700 fused_ordering(114) 00:14:48.700 fused_ordering(115) 00:14:48.700 fused_ordering(116) 00:14:48.700 fused_ordering(117) 00:14:48.700 fused_ordering(118) 00:14:48.700 fused_ordering(119) 00:14:48.700 fused_ordering(120) 00:14:48.700 fused_ordering(121) 00:14:48.700 fused_ordering(122) 00:14:48.700 fused_ordering(123) 00:14:48.700 fused_ordering(124) 00:14:48.700 fused_ordering(125) 00:14:48.700 fused_ordering(126) 00:14:48.700 fused_ordering(127) 00:14:48.701 fused_ordering(128) 00:14:48.701 fused_ordering(129) 00:14:48.701 fused_ordering(130) 00:14:48.701 fused_ordering(131) 00:14:48.701 fused_ordering(132) 00:14:48.701 fused_ordering(133) 00:14:48.701 fused_ordering(134) 00:14:48.701 fused_ordering(135) 00:14:48.701 fused_ordering(136) 00:14:48.701 fused_ordering(137) 00:14:48.701 fused_ordering(138) 00:14:48.701 fused_ordering(139) 00:14:48.701 fused_ordering(140) 00:14:48.701 fused_ordering(141) 00:14:48.701 fused_ordering(142) 00:14:48.701 fused_ordering(143) 00:14:48.701 fused_ordering(144) 00:14:48.701 fused_ordering(145) 00:14:48.701 fused_ordering(146) 00:14:48.701 fused_ordering(147) 00:14:48.701 fused_ordering(148) 00:14:48.701 fused_ordering(149) 00:14:48.701 fused_ordering(150) 00:14:48.701 fused_ordering(151) 00:14:48.701 fused_ordering(152) 00:14:48.701 fused_ordering(153) 00:14:48.701 fused_ordering(154) 00:14:48.701 fused_ordering(155) 00:14:48.701 fused_ordering(156) 00:14:48.701 fused_ordering(157) 00:14:48.701 fused_ordering(158) 00:14:48.701 fused_ordering(159) 00:14:48.701 fused_ordering(160) 00:14:48.701 fused_ordering(161) 00:14:48.701 fused_ordering(162) 00:14:48.701 fused_ordering(163) 00:14:48.701 fused_ordering(164) 00:14:48.701 fused_ordering(165) 00:14:48.701 fused_ordering(166) 00:14:48.701 fused_ordering(167) 00:14:48.701 fused_ordering(168) 00:14:48.701 fused_ordering(169) 00:14:48.701 fused_ordering(170) 
00:14:48.701 fused_ordering(171) 00:14:48.701 fused_ordering(172) 00:14:48.701 fused_ordering(173) 00:14:48.701 fused_ordering(174) 00:14:48.701 fused_ordering(175) 00:14:48.701 fused_ordering(176) 00:14:48.701 fused_ordering(177) 00:14:48.701 fused_ordering(178) 00:14:48.701 fused_ordering(179) 00:14:48.701 fused_ordering(180) 00:14:48.701 fused_ordering(181) 00:14:48.701 fused_ordering(182) 00:14:48.701 fused_ordering(183) 00:14:48.701 fused_ordering(184) 00:14:48.701 fused_ordering(185) 00:14:48.701 fused_ordering(186) 00:14:48.701 fused_ordering(187) 00:14:48.701 fused_ordering(188) 00:14:48.701 fused_ordering(189) 00:14:48.701 fused_ordering(190) 00:14:48.701 fused_ordering(191) 00:14:48.701 fused_ordering(192) 00:14:48.701 fused_ordering(193) 00:14:48.701 fused_ordering(194) 00:14:48.701 fused_ordering(195) 00:14:48.701 fused_ordering(196) 00:14:48.701 fused_ordering(197) 00:14:48.701 fused_ordering(198) 00:14:48.701 fused_ordering(199) 00:14:48.701 fused_ordering(200) 00:14:48.701 fused_ordering(201) 00:14:48.701 fused_ordering(202) 00:14:48.701 fused_ordering(203) 00:14:48.701 fused_ordering(204) 00:14:48.701 fused_ordering(205) 00:14:48.961 fused_ordering(206) 00:14:48.961 fused_ordering(207) 00:14:48.961 fused_ordering(208) 00:14:48.961 fused_ordering(209) 00:14:48.961 fused_ordering(210) 00:14:48.961 fused_ordering(211) 00:14:48.961 fused_ordering(212) 00:14:48.961 fused_ordering(213) 00:14:48.961 fused_ordering(214) 00:14:48.961 fused_ordering(215) 00:14:48.961 fused_ordering(216) 00:14:48.961 fused_ordering(217) 00:14:48.961 fused_ordering(218) 00:14:48.961 fused_ordering(219) 00:14:48.961 fused_ordering(220) 00:14:48.961 fused_ordering(221) 00:14:48.961 fused_ordering(222) 00:14:48.961 fused_ordering(223) 00:14:48.961 fused_ordering(224) 00:14:48.961 fused_ordering(225) 00:14:48.961 fused_ordering(226) 00:14:48.961 fused_ordering(227) 00:14:48.961 fused_ordering(228) 00:14:48.961 fused_ordering(229) 00:14:48.961 fused_ordering(230) 00:14:48.961 
fused_ordering(231) 00:14:48.961 … fused_ordering(1016) 00:14:51.049
fused_ordering(1017) 00:14:51.049 fused_ordering(1018) 00:14:51.049 fused_ordering(1019) 00:14:51.049 fused_ordering(1020) 00:14:51.049 fused_ordering(1021) 00:14:51.049 fused_ordering(1022) 00:14:51.049 fused_ordering(1023) 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.049 rmmod nvme_tcp 00:14:51.049 rmmod nvme_fabrics 00:14:51.049 rmmod nvme_keyring 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3629853 ']' 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3629853 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3629853 ']' 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 3629853 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3629853 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3629853' 00:14:51.049 killing process with pid 3629853 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3629853 00:14:51.049 19:54:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3629853 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:14:51.310 19:54:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:53.860 00:14:53.860 real 0m13.769s 00:14:53.860 user 0m7.537s 00:14:53.860 sys 0m7.561s 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.860 ************************************ 00:14:53.860 END TEST nvmf_fused_ordering 00:14:53.860 ************************************ 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.860 ************************************ 00:14:53.860 START TEST nvmf_ns_masking 00:14:53.860 ************************************ 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:53.860 * Looking for test storage... 
00:14:53.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.860 
19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.860 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cbfa521d-1e98-48c9-a703-8d0ee00931af 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0dd2271c-f1b8-4724-bbd5-0643058c375e 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dcfbc1fd-260d-4bd6-a1f0-d07a58179877 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.861 19:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.861 19:54:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:00.451 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:00.452 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:00.452 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:00.452 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:00.452 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:00.452 19:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:00.452 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.714 19:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:00.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:15:00.714 00:15:00.714 --- 10.0.0.2 ping statistics --- 00:15:00.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.714 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:15:00.714 00:15:00.714 --- 10.0.0.1 ping statistics --- 00:15:00.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.714 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3634725 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3634725 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3634725 ']' 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.714 19:54:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.714 [2024-07-24 19:54:48.631411] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:15:00.714 [2024-07-24 19:54:48.631476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.714 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.975 [2024-07-24 19:54:48.701028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.975 [2024-07-24 19:54:48.774511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.975 [2024-07-24 19:54:48.774549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.975 [2024-07-24 19:54:48.774557] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.975 [2024-07-24 19:54:48.774563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.975 [2024-07-24 19:54:48.774569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:00.975 [2024-07-24 19:54:48.774587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.593 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.593 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:01.593 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:01.593 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.593 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:01.593 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.593 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:01.854 [2024-07-24 19:54:49.577721] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.854 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:01.854 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:01.854 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:01.854 Malloc1 00:15:01.854 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:02.114 Malloc2 00:15:02.114 19:54:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:02.375 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:02.375 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.636 [2024-07-24 19:54:50.353944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.636 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:02.636 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcfbc1fd-260d-4bd6-a1f0-d07a58179877 -a 10.0.0.2 -s 4420 -i 4 00:15:02.636 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.636 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.636 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.636 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:02.636 19:54:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:04.552 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:04.552 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:04.552 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:04.813 [ 0]:0x1 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17b4530cf07b4b0e859bd5c45c641853 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17b4530cf07b4b0e859bd5c45c641853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.813 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:05.074 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:05.074 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.074 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:05.074 [ 0]:0x1 00:15:05.074 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.074 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:05.074 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17b4530cf07b4b0e859bd5c45c641853 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17b4530cf07b4b0e859bd5c45c641853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:05.075 [ 1]:0x2 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=198a49cd38fe49d085fe4ebe3590d240 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 198a49cd38fe49d085fe4ebe3590d240 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:05.075 19:54:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.075 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.336 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcfbc1fd-260d-4bd6-a1f0-d07a58179877 -a 10.0.0.2 -s 4420 -i 4 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:05.597 19:54:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.142 19:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.142 [ 0]:0x2 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=198a49cd38fe49d085fe4ebe3590d240 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 198a49cd38fe49d085fe4ebe3590d240 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.142 [ 0]:0x1 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.142 19:54:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17b4530cf07b4b0e859bd5c45c641853 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17b4530cf07b4b0e859bd5c45c641853 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:15:08.142 [ 1]:0x2 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=198a49cd38fe49d085fe4ebe3590d240 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 198a49cd38fe49d085fe4ebe3590d240 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.142 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.404 [ 0]:0x2 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.404 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.665 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=198a49cd38fe49d085fe4ebe3590d240 00:15:08.665 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ 198a49cd38fe49d085fe4ebe3590d240 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.665 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:08.665 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.665 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.665 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:08.665 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcfbc1fd-260d-4bd6-a1f0-d07a58179877 -a 10.0.0.2 -s 4420 -i 4 00:15:08.926 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:08.926 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:08.926 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.926 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:08.926 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:08.926 19:54:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:10.909 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:11.170 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:11.170 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:11.170 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:11.170 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.170 19:54:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.170 [ 0]:0x1 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17b4530cf07b4b0e859bd5c45c641853 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17b4530cf07b4b0e859bd5c45c641853 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.170 [ 1]:0x2 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.170 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=198a49cd38fe49d085fe4ebe3590d240 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 198a49cd38fe49d085fe4ebe3590d240 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.431 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:11.432 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.432 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.432 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.432 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.693 [ 0]:0x2 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=198a49cd38fe49d085fe4ebe3590d240 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 198a49cd38fe49d085fe4ebe3590d240 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.693 [2024-07-24 19:54:59.588403] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:11.693 request: 00:15:11.693 { 00:15:11.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.693 "nsid": 2, 00:15:11.693 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.693 "method": "nvmf_ns_remove_host", 00:15:11.693 "req_id": 1 00:15:11.693 } 00:15:11.693 Got JSON-RPC error response 00:15:11.693 response: 00:15:11.693 { 00:15:11.693 "code": -32602, 00:15:11.693 "message": "Invalid parameters" 00:15:11.693 } 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.693 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.954 19:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.954 [ 0]:0x2 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=198a49cd38fe49d085fe4ebe3590d240 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 198a49cd38fe49d085fe4ebe3590d240 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3637055 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3637055 /var/tmp/host.sock 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3637055 ']' 00:15:11.954 
19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:11.954 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.955 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:11.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:11.955 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.955 19:54:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:11.955 [2024-07-24 19:54:59.844667] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:15:11.955 [2024-07-24 19:54:59.844717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637055 ] 00:15:11.955 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.215 [2024-07-24 19:54:59.921435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.215 [2024-07-24 19:54:59.985065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.787 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.787 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:12.787 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.048 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.048 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cbfa521d-1e98-48c9-a703-8d0ee00931af 00:15:13.048 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:13.048 19:55:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CBFA521D1E9848C9A7038D0EE00931AF -i 00:15:13.309 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0dd2271c-f1b8-4724-bbd5-0643058c375e 00:15:13.309 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:13.309 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0DD2271CF1B84724BBD50643058C375E -i 00:15:13.309 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:13.570 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:13.831 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:13.831 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:14.092 nvme0n1 00:15:14.092 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:14.092 19:55:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:14.092 nvme1n2 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:14.353 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cbfa521d-1e98-48c9-a703-8d0ee00931af == \c\b\f\a\5\2\1\d\-\1\e\9\8\-\4\8\c\9\-\a\7\0\3\-\8\d\0\e\e\0\0\9\3\1\a\f ]] 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0dd2271c-f1b8-4724-bbd5-0643058c375e == \0\d\d\2\2\7\1\c\-\f\1\b\8\-\4\7\2\4\-\b\b\d\5\-\0\6\4\3\0\5\8\c\3\7\5\e ]] 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3637055 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3637055 ']' 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3637055 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:14.613 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.873 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3637055 00:15:14.873 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:14.873 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:14.873 
19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3637055' 00:15:14.873 killing process with pid 3637055 00:15:14.873 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3637055 00:15:14.873 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3637055 00:15:14.873 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.133 19:55:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.133 rmmod nvme_tcp 00:15:15.133 rmmod nvme_fabrics 00:15:15.133 rmmod nvme_keyring 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 3634725 ']' 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3634725 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3634725 ']' 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3634725 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.133 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3634725 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3634725' 00:15:15.395 killing process with pid 3634725 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3634725 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3634725 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.395 19:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.939 00:15:17.939 real 0m24.046s 00:15:17.939 user 0m23.897s 00:15:17.939 sys 0m7.178s 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 ************************************ 00:15:17.939 END TEST nvmf_ns_masking 00:15:17.939 ************************************ 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 ************************************ 00:15:17.939 START TEST nvmf_nvme_cli 00:15:17.939 ************************************ 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:17.939 * Looking for test storage... 
00:15:17.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.939 19:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.939 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.940 19:55:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:24.545 
19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.545 19:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:24.545 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:24.545 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:24.545 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:24.545 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:24.545 19:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.545 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.808 19:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:15:24.808 00:15:24.808 --- 10.0.0.2 ping statistics --- 00:15:24.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.808 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:24.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:15:24.808 00:15:24.808 --- 10.0.0.1 ping statistics --- 00:15:24.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.808 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3641912 00:15:24.808 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3641912 00:15:24.809 19:55:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.809 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3641912 ']' 00:15:24.809 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.809 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.809 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.809 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.809 19:55:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.070 [2024-07-24 19:55:12.781223] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:15:25.070 [2024-07-24 19:55:12.781290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.070 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.070 [2024-07-24 19:55:12.853513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.070 [2024-07-24 19:55:12.929973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.070 [2024-07-24 19:55:12.930016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.070 [2024-07-24 19:55:12.930024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.070 [2024-07-24 19:55:12.930030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.070 [2024-07-24 19:55:12.930036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.070 [2024-07-24 19:55:12.930171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.070 [2024-07-24 19:55:12.930305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.070 [2024-07-24 19:55:12.930513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.070 [2024-07-24 19:55:12.930519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.641 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.641 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:25.641 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.641 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.641 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 [2024-07-24 19:55:13.613062] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 Malloc0 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 Malloc1 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 [2024-07-24 19:55:13.702900] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:25.902 00:15:25.902 Discovery Log Number of Records 2, Generation counter 2 00:15:25.902 =====Discovery Log Entry 0====== 00:15:25.902 trtype: tcp 00:15:25.902 adrfam: ipv4 00:15:25.902 subtype: current discovery subsystem 00:15:25.902 treq: not required 00:15:25.902 portid: 0 00:15:25.902 trsvcid: 4420 00:15:25.902 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:25.902 traddr: 10.0.0.2 00:15:25.902 eflags: explicit discovery connections, duplicate discovery information 00:15:25.902 sectype: none 00:15:25.902 =====Discovery Log Entry 1====== 00:15:25.902 trtype: tcp 00:15:25.902 adrfam: ipv4 00:15:25.902 subtype: nvme subsystem 00:15:25.902 treq: not required 00:15:25.902 portid: 0 00:15:25.902 trsvcid: 4420 00:15:25.902 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:25.902 traddr: 10.0.0.2 00:15:25.902 eflags: none 00:15:25.902 sectype: none 00:15:25.902 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:26.162 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:26.162 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:26.162 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.162 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:26.162 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:26.162 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:26.162 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:26.163 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:15:26.163 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:26.163 19:55:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:27.549 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:27.549 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:27.549 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.549 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:27.549 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:27.549 19:55:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:30.098 /dev/nvme0n1 ]] 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:30.098 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.099 rmmod nvme_tcp 00:15:30.099 rmmod nvme_fabrics 00:15:30.099 rmmod 
nvme_keyring 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3641912 ']' 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3641912 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3641912 ']' 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3641912 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3641912 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3641912' 00:15:30.099 killing process with pid 3641912 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3641912 00:15:30.099 19:55:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3641912 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.099 19:55:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.646 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.647 00:15:32.647 real 0m14.652s 00:15:32.647 user 0m22.201s 00:15:32.647 sys 0m5.962s 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.647 ************************************ 00:15:32.647 END TEST nvmf_nvme_cli 00:15:32.647 ************************************ 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.647 
************************************ 00:15:32.647 START TEST nvmf_vfio_user 00:15:32.647 ************************************ 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:32.647 * Looking for test storage... 00:15:32.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.647 19:55:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:32.647 19:55:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3643547 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3643547' 00:15:32.647 Process pid: 3643547 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3643547 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3643547 ']' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.647 19:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:32.647 [2024-07-24 19:55:20.347070] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:15:32.647 [2024-07-24 19:55:20.347127] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.647 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.647 [2024-07-24 19:55:20.409086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.647 [2024-07-24 19:55:20.475694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.647 [2024-07-24 19:55:20.475731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.647 [2024-07-24 19:55:20.475739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.647 [2024-07-24 19:55:20.475745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.647 [2024-07-24 19:55:20.475751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.647 [2024-07-24 19:55:20.475897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.647 [2024-07-24 19:55:20.476016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.647 [2024-07-24 19:55:20.476174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.647 [2024-07-24 19:55:20.476175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.220 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.220 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:33.220 19:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:34.626 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:34.626 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:34.626 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:34.626 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.626 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:34.626 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:34.626 Malloc1 00:15:34.626 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:34.904 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:34.904 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:35.165 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.165 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:35.165 19:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:35.426 Malloc2 00:15:35.426 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:35.426 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:35.686 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:35.949 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:35.949 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:35.949 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:35.949 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:35.949 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:35.949 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:35.949 [2024-07-24 19:55:23.692657] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:15:35.949 [2024-07-24 19:55:23.692702] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644242 ] 00:15:35.949 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.949 [2024-07-24 19:55:23.723824] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:35.949 [2024-07-24 19:55:23.732519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:35.949 [2024-07-24 19:55:23.732538] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdf7911d000 00:15:35.949 [2024-07-24 19:55:23.733517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.949 [2024-07-24 19:55:23.734526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.949 [2024-07-24 
19:55:23.735528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.949 [2024-07-24 19:55:23.736536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.949 [2024-07-24 19:55:23.737540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.949 [2024-07-24 19:55:23.738542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.949 [2024-07-24 19:55:23.739552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.949 [2024-07-24 19:55:23.740560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.949 [2024-07-24 19:55:23.741565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:35.949 [2024-07-24 19:55:23.741573] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdf79112000 00:15:35.949 [2024-07-24 19:55:23.742898] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:35.949 [2024-07-24 19:55:23.763812] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:35.949 [2024-07-24 19:55:23.763837] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:35.949 [2024-07-24 19:55:23.766725] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:15:35.949 [2024-07-24 19:55:23.766777] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:35.949 [2024-07-24 19:55:23.766866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:35.949 [2024-07-24 19:55:23.766881] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:35.949 [2024-07-24 19:55:23.766887] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:35.949 [2024-07-24 19:55:23.767734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:35.949 [2024-07-24 19:55:23.767744] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:35.949 [2024-07-24 19:55:23.767751] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:35.949 [2024-07-24 19:55:23.768737] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:35.949 [2024-07-24 19:55:23.768746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:35.949 [2024-07-24 19:55:23.768757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:35.949 [2024-07-24 19:55:23.769745] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:35.949 [2024-07-24 19:55:23.769754] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:35.949 [2024-07-24 19:55:23.770747] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:35.950 [2024-07-24 19:55:23.770755] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:35.950 [2024-07-24 19:55:23.770760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:35.950 [2024-07-24 19:55:23.770766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:35.950 [2024-07-24 19:55:23.770872] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:35.950 [2024-07-24 19:55:23.770877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:35.950 [2024-07-24 19:55:23.770882] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:35.950 [2024-07-24 19:55:23.771750] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:35.950 [2024-07-24 19:55:23.772761] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:35.950 [2024-07-24 19:55:23.773765] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:35.950 
[2024-07-24 19:55:23.774764] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.950 [2024-07-24 19:55:23.774823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:35.950 [2024-07-24 19:55:23.775779] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:35.950 [2024-07-24 19:55:23.775787] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:35.950 [2024-07-24 19:55:23.775791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.775813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:35.950 [2024-07-24 19:55:23.775820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.775834] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.950 [2024-07-24 19:55:23.775839] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.950 [2024-07-24 19:55:23.775843] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.950 [2024-07-24 19:55:23.775856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.775890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:35.950 [2024-07-24 19:55:23.775900] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:35.950 [2024-07-24 19:55:23.775905] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:35.950 [2024-07-24 19:55:23.775909] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:35.950 [2024-07-24 19:55:23.775914] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:35.950 [2024-07-24 19:55:23.775918] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:35.950 [2024-07-24 19:55:23.775923] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:35.950 [2024-07-24 19:55:23.775927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.775935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.775947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.775956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:35.950 [2024-07-24 19:55:23.775970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.950 [2024-07-24 19:55:23.775979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.950 [2024-07-24 19:55:23.775987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.950 [2024-07-24 19:55:23.775996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.950 [2024-07-24 19:55:23.776000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.776025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:35.950 [2024-07-24 19:55:23.776031] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:35.950 [2024-07-24 19:55:23.776035] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776058] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.776070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:35.950 [2024-07-24 19:55:23.776132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776147] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:35.950 [2024-07-24 19:55:23.776152] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:35.950 [2024-07-24 19:55:23.776155] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.950 [2024-07-24 19:55:23.776161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.776175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:35.950 [2024-07-24 19:55:23.776184] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:35.950 [2024-07-24 19:55:23.776195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:35.950 [2024-07-24 
19:55:23.776213] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.950 [2024-07-24 19:55:23.776217] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.950 [2024-07-24 19:55:23.776221] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.950 [2024-07-24 19:55:23.776227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.776243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:35.950 [2024-07-24 19:55:23.776254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776269] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.950 [2024-07-24 19:55:23.776273] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.950 [2024-07-24 19:55:23.776276] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.950 [2024-07-24 19:55:23.776282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.776292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:35.950 [2024-07-24 19:55:23.776299] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776337] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:35.950 [2024-07-24 19:55:23.776341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:35.950 [2024-07-24 19:55:23.776346] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:35.950 [2024-07-24 19:55:23.776364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.776374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:15:35.950 [2024-07-24 19:55:23.776385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:35.950 [2024-07-24 19:55:23.776395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:35.951 [2024-07-24 19:55:23.776406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:35.951 [2024-07-24 19:55:23.776418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:35.951 [2024-07-24 19:55:23.776429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:35.951 [2024-07-24 19:55:23.776436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:35.951 [2024-07-24 19:55:23.776449] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:35.951 [2024-07-24 19:55:23.776454] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:35.951 [2024-07-24 19:55:23.776457] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:35.951 [2024-07-24 19:55:23.776461] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:35.951 [2024-07-24 19:55:23.776464] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:35.951 [2024-07-24 19:55:23.776470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:35.951 [2024-07-24 19:55:23.776478] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:15:35.951 [2024-07-24 19:55:23.776482] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:35.951 [2024-07-24 19:55:23.776485] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.951 [2024-07-24 19:55:23.776491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:35.951 [2024-07-24 19:55:23.776498] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:35.951 [2024-07-24 19:55:23.776503] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.951 [2024-07-24 19:55:23.776506] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.951 [2024-07-24 19:55:23.776512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.951 [2024-07-24 19:55:23.776519] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:35.951 [2024-07-24 19:55:23.776525] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:35.951 [2024-07-24 19:55:23.776528] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.951 [2024-07-24 19:55:23.776534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:35.951 [2024-07-24 19:55:23.776541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:35.951 [2024-07-24 19:55:23.776553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:35.951 [2024-07-24 19:55:23.776565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:35.951 [2024-07-24 19:55:23.776573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:35.951 ===================================================== 00:15:35.951 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:35.951 ===================================================== 00:15:35.951 Controller Capabilities/Features 00:15:35.951 ================================ 00:15:35.951 Vendor ID: 4e58 00:15:35.951 Subsystem Vendor ID: 4e58 00:15:35.951 Serial Number: SPDK1 00:15:35.951 Model Number: SPDK bdev Controller 00:15:35.951 Firmware Version: 24.09 00:15:35.951 Recommended Arb Burst: 6 00:15:35.951 IEEE OUI Identifier: 8d 6b 50 00:15:35.951 Multi-path I/O 00:15:35.951 May have multiple subsystem ports: Yes 00:15:35.951 May have multiple controllers: Yes 00:15:35.951 Associated with SR-IOV VF: No 00:15:35.951 Max Data Transfer Size: 131072 00:15:35.951 Max Number of Namespaces: 32 00:15:35.951 Max Number of I/O Queues: 127 00:15:35.951 NVMe Specification Version (VS): 1.3 00:15:35.951 NVMe Specification Version (Identify): 1.3 00:15:35.951 Maximum Queue Entries: 256 00:15:35.951 Contiguous Queues Required: Yes 00:15:35.951 Arbitration Mechanisms Supported 00:15:35.951 Weighted Round Robin: Not Supported 00:15:35.951 Vendor Specific: Not Supported 00:15:35.951 Reset Timeout: 15000 ms 00:15:35.951 Doorbell Stride: 4 bytes 00:15:35.951 NVM Subsystem Reset: Not Supported 00:15:35.951 Command Sets Supported 00:15:35.951 NVM Command Set: Supported 00:15:35.951 Boot Partition: Not Supported 00:15:35.951 Memory Page Size Minimum: 4096 bytes 00:15:35.951 Memory Page Size Maximum: 4096 bytes 00:15:35.951 Persistent Memory Region: Not 
Supported 00:15:35.951 Optional Asynchronous Events Supported 00:15:35.951 Namespace Attribute Notices: Supported 00:15:35.951 Firmware Activation Notices: Not Supported 00:15:35.951 ANA Change Notices: Not Supported 00:15:35.951 PLE Aggregate Log Change Notices: Not Supported 00:15:35.951 LBA Status Info Alert Notices: Not Supported 00:15:35.951 EGE Aggregate Log Change Notices: Not Supported 00:15:35.951 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.951 Zone Descriptor Change Notices: Not Supported 00:15:35.951 Discovery Log Change Notices: Not Supported 00:15:35.951 Controller Attributes 00:15:35.951 128-bit Host Identifier: Supported 00:15:35.951 Non-Operational Permissive Mode: Not Supported 00:15:35.951 NVM Sets: Not Supported 00:15:35.951 Read Recovery Levels: Not Supported 00:15:35.951 Endurance Groups: Not Supported 00:15:35.951 Predictable Latency Mode: Not Supported 00:15:35.951 Traffic Based Keep ALive: Not Supported 00:15:35.951 Namespace Granularity: Not Supported 00:15:35.951 SQ Associations: Not Supported 00:15:35.951 UUID List: Not Supported 00:15:35.951 Multi-Domain Subsystem: Not Supported 00:15:35.951 Fixed Capacity Management: Not Supported 00:15:35.951 Variable Capacity Management: Not Supported 00:15:35.951 Delete Endurance Group: Not Supported 00:15:35.951 Delete NVM Set: Not Supported 00:15:35.951 Extended LBA Formats Supported: Not Supported 00:15:35.951 Flexible Data Placement Supported: Not Supported 00:15:35.951 00:15:35.951 Controller Memory Buffer Support 00:15:35.951 ================================ 00:15:35.951 Supported: No 00:15:35.951 00:15:35.951 Persistent Memory Region Support 00:15:35.951 ================================ 00:15:35.951 Supported: No 00:15:35.951 00:15:35.951 Admin Command Set Attributes 00:15:35.951 ============================ 00:15:35.951 Security Send/Receive: Not Supported 00:15:35.951 Format NVM: Not Supported 00:15:35.951 Firmware Activate/Download: Not Supported 00:15:35.951 Namespace 
Management: Not Supported 00:15:35.951 Device Self-Test: Not Supported 00:15:35.951 Directives: Not Supported 00:15:35.951 NVMe-MI: Not Supported 00:15:35.951 Virtualization Management: Not Supported 00:15:35.951 Doorbell Buffer Config: Not Supported 00:15:35.951 Get LBA Status Capability: Not Supported 00:15:35.951 Command & Feature Lockdown Capability: Not Supported 00:15:35.951 Abort Command Limit: 4 00:15:35.951 Async Event Request Limit: 4 00:15:35.951 Number of Firmware Slots: N/A 00:15:35.951 Firmware Slot 1 Read-Only: N/A 00:15:35.951 Firmware Activation Without Reset: N/A 00:15:35.951 Multiple Update Detection Support: N/A 00:15:35.951 Firmware Update Granularity: No Information Provided 00:15:35.951 Per-Namespace SMART Log: No 00:15:35.951 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.951 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:35.951 Command Effects Log Page: Supported 00:15:35.951 Get Log Page Extended Data: Supported 00:15:35.951 Telemetry Log Pages: Not Supported 00:15:35.951 Persistent Event Log Pages: Not Supported 00:15:35.951 Supported Log Pages Log Page: May Support 00:15:35.951 Commands Supported & Effects Log Page: Not Supported 00:15:35.951 Feature Identifiers & Effects Log Page:May Support 00:15:35.951 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.951 Data Area 4 for Telemetry Log: Not Supported 00:15:35.951 Error Log Page Entries Supported: 128 00:15:35.951 Keep Alive: Supported 00:15:35.951 Keep Alive Granularity: 10000 ms 00:15:35.951 00:15:35.951 NVM Command Set Attributes 00:15:35.951 ========================== 00:15:35.951 Submission Queue Entry Size 00:15:35.951 Max: 64 00:15:35.951 Min: 64 00:15:35.951 Completion Queue Entry Size 00:15:35.951 Max: 16 00:15:35.951 Min: 16 00:15:35.951 Number of Namespaces: 32 00:15:35.951 Compare Command: Supported 00:15:35.951 Write Uncorrectable Command: Not Supported 00:15:35.951 Dataset Management Command: Supported 00:15:35.951 Write Zeroes Command: Supported 
00:15:35.951 Set Features Save Field: Not Supported 00:15:35.951 Reservations: Not Supported 00:15:35.951 Timestamp: Not Supported 00:15:35.951 Copy: Supported 00:15:35.951 Volatile Write Cache: Present 00:15:35.951 Atomic Write Unit (Normal): 1 00:15:35.951 Atomic Write Unit (PFail): 1 00:15:35.951 Atomic Compare & Write Unit: 1 00:15:35.951 Fused Compare & Write: Supported 00:15:35.951 Scatter-Gather List 00:15:35.951 SGL Command Set: Supported (Dword aligned) 00:15:35.952 SGL Keyed: Not Supported 00:15:35.952 SGL Bit Bucket Descriptor: Not Supported 00:15:35.952 SGL Metadata Pointer: Not Supported 00:15:35.952 Oversized SGL: Not Supported 00:15:35.952 SGL Metadata Address: Not Supported 00:15:35.952 SGL Offset: Not Supported 00:15:35.952 Transport SGL Data Block: Not Supported 00:15:35.952 Replay Protected Memory Block: Not Supported 00:15:35.952 00:15:35.952 Firmware Slot Information 00:15:35.952 ========================= 00:15:35.952 Active slot: 1 00:15:35.952 Slot 1 Firmware Revision: 24.09 00:15:35.952 00:15:35.952 00:15:35.952 Commands Supported and Effects 00:15:35.952 ============================== 00:15:35.952 Admin Commands 00:15:35.952 -------------- 00:15:35.952 Get Log Page (02h): Supported 00:15:35.952 Identify (06h): Supported 00:15:35.952 Abort (08h): Supported 00:15:35.952 Set Features (09h): Supported 00:15:35.952 Get Features (0Ah): Supported 00:15:35.952 Asynchronous Event Request (0Ch): Supported 00:15:35.952 Keep Alive (18h): Supported 00:15:35.952 I/O Commands 00:15:35.952 ------------ 00:15:35.952 Flush (00h): Supported LBA-Change 00:15:35.952 Write (01h): Supported LBA-Change 00:15:35.952 Read (02h): Supported 00:15:35.952 Compare (05h): Supported 00:15:35.952 Write Zeroes (08h): Supported LBA-Change 00:15:35.952 Dataset Management (09h): Supported LBA-Change 00:15:35.952 Copy (19h): Supported LBA-Change 00:15:35.952 00:15:35.952 Error Log 00:15:35.952 ========= 00:15:35.952 00:15:35.952 Arbitration 00:15:35.952 =========== 00:15:35.952 
Arbitration Burst: 1 00:15:35.952 00:15:35.952 Power Management 00:15:35.952 ================ 00:15:35.952 Number of Power States: 1 00:15:35.952 Current Power State: Power State #0 00:15:35.952 Power State #0: 00:15:35.952 Max Power: 0.00 W 00:15:35.952 Non-Operational State: Operational 00:15:35.952 Entry Latency: Not Reported 00:15:35.952 Exit Latency: Not Reported 00:15:35.952 Relative Read Throughput: 0 00:15:35.952 Relative Read Latency: 0 00:15:35.952 Relative Write Throughput: 0 00:15:35.952 Relative Write Latency: 0 00:15:35.952 Idle Power: Not Reported 00:15:35.952 Active Power: Not Reported 00:15:35.952 Non-Operational Permissive Mode: Not Supported 00:15:35.952 00:15:35.952 Health Information 00:15:35.952 ================== 00:15:35.952 Critical Warnings: 00:15:35.952 Available Spare Space: OK 00:15:35.952 Temperature: OK 00:15:35.952 Device Reliability: OK 00:15:35.952 Read Only: No 00:15:35.952 Volatile Memory Backup: OK 00:15:35.952 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:35.952 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:35.952 Available Spare: 0% 00:15:35.952 Available Sp[2024-07-24 19:55:23.776677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:35.952 [2024-07-24 19:55:23.776686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:35.952 [2024-07-24 19:55:23.776715] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:35.952 [2024-07-24 19:55:23.776725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.952 [2024-07-24 19:55:23.776731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.952 [2024-07-24 19:55:23.776737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.952 [2024-07-24 19:55:23.776743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.952 [2024-07-24 19:55:23.776784] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:35.952 [2024-07-24 19:55:23.776794] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:35.952 [2024-07-24 19:55:23.777786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.952 [2024-07-24 19:55:23.777826] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:35.952 [2024-07-24 19:55:23.777832] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:35.952 [2024-07-24 19:55:23.778795] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:35.952 [2024-07-24 19:55:23.778806] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:35.952 [2024-07-24 19:55:23.778867] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:35.952 [2024-07-24 19:55:23.783207] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:35.952 are Threshold: 0% 00:15:35.952 Life Percentage Used: 0% 00:15:35.952 Data Units Read: 0 00:15:35.952 Data Units Written: 0 00:15:35.952 Host Read Commands: 0 00:15:35.952 Host Write Commands: 
0 00:15:35.952 Controller Busy Time: 0 minutes 00:15:35.952 Power Cycles: 0 00:15:35.952 Power On Hours: 0 hours 00:15:35.952 Unsafe Shutdowns: 0 00:15:35.952 Unrecoverable Media Errors: 0 00:15:35.952 Lifetime Error Log Entries: 0 00:15:35.952 Warning Temperature Time: 0 minutes 00:15:35.952 Critical Temperature Time: 0 minutes 00:15:35.952 00:15:35.952 Number of Queues 00:15:35.952 ================ 00:15:35.952 Number of I/O Submission Queues: 127 00:15:35.952 Number of I/O Completion Queues: 127 00:15:35.952 00:15:35.952 Active Namespaces 00:15:35.952 ================= 00:15:35.952 Namespace ID:1 00:15:35.952 Error Recovery Timeout: Unlimited 00:15:35.952 Command Set Identifier: NVM (00h) 00:15:35.952 Deallocate: Supported 00:15:35.952 Deallocated/Unwritten Error: Not Supported 00:15:35.952 Deallocated Read Value: Unknown 00:15:35.952 Deallocate in Write Zeroes: Not Supported 00:15:35.952 Deallocated Guard Field: 0xFFFF 00:15:35.952 Flush: Supported 00:15:35.952 Reservation: Supported 00:15:35.952 Namespace Sharing Capabilities: Multiple Controllers 00:15:35.952 Size (in LBAs): 131072 (0GiB) 00:15:35.952 Capacity (in LBAs): 131072 (0GiB) 00:15:35.952 Utilization (in LBAs): 131072 (0GiB) 00:15:35.952 NGUID: 90802E6CBAED4372A086D99190B36A80 00:15:35.952 UUID: 90802e6c-baed-4372-a086-d99190b36a80 00:15:35.952 Thin Provisioning: Not Supported 00:15:35.952 Per-NS Atomic Units: Yes 00:15:35.952 Atomic Boundary Size (Normal): 0 00:15:35.952 Atomic Boundary Size (PFail): 0 00:15:35.952 Atomic Boundary Offset: 0 00:15:35.952 Maximum Single Source Range Length: 65535 00:15:35.952 Maximum Copy Length: 65535 00:15:35.952 Maximum Source Range Count: 1 00:15:35.952 NGUID/EUI64 Never Reused: No 00:15:35.952 Namespace Write Protected: No 00:15:35.952 Number of LBA Formats: 1 00:15:35.952 Current LBA Format: LBA Format #00 00:15:35.952 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.952 00:15:35.952 19:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:35.952 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.213 [2024-07-24 19:55:23.967845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.503 Initializing NVMe Controllers 00:15:41.503 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:41.503 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:41.503 Initialization complete. Launching workers. 00:15:41.503 ======================================================== 00:15:41.503 Latency(us) 00:15:41.503 Device Information : IOPS MiB/s Average min max 00:15:41.503 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39942.26 156.02 3204.50 847.03 9790.17 00:15:41.503 ======================================================== 00:15:41.503 Total : 39942.26 156.02 3204.50 847.03 9790.17 00:15:41.503 00:15:41.503 [2024-07-24 19:55:28.988351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.503 19:55:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:41.503 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.503 [2024-07-24 19:55:29.171268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:46.791 Initializing NVMe Controllers 00:15:46.791 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:46.791 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:46.791 Initialization complete. Launching workers. 00:15:46.791 ======================================================== 00:15:46.791 Latency(us) 00:15:46.791 Device Information : IOPS MiB/s Average min max 00:15:46.791 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16044.01 62.67 7977.53 6525.45 8437.99 00:15:46.791 ======================================================== 00:15:46.791 Total : 16044.01 62.67 7977.53 6525.45 8437.99 00:15:46.791 00:15:46.791 [2024-07-24 19:55:34.207420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:46.791 19:55:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:46.791 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.791 [2024-07-24 19:55:34.388366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.079 [2024-07-24 19:55:39.463438] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.079 Initializing NVMe Controllers 00:15:52.079 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.079 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.079 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:52.079 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:52.079 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:52.079 Initialization complete. Launching workers. 00:15:52.079 Starting thread on core 2 00:15:52.079 Starting thread on core 3 00:15:52.079 Starting thread on core 1 00:15:52.079 19:55:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:52.079 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.079 [2024-07-24 19:55:39.714590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.378 [2024-07-24 19:55:42.776699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.378 Initializing NVMe Controllers 00:15:55.378 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.378 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:55.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:55.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:55.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:55.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:55.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:55.378 Initialization complete. Launching workers. 
00:15:55.378 Starting thread on core 1 with urgent priority queue 00:15:55.378 Starting thread on core 2 with urgent priority queue 00:15:55.378 Starting thread on core 3 with urgent priority queue 00:15:55.378 Starting thread on core 0 with urgent priority queue 00:15:55.378 SPDK bdev Controller (SPDK1 ) core 0: 12158.33 IO/s 8.22 secs/100000 ios 00:15:55.378 SPDK bdev Controller (SPDK1 ) core 1: 15050.33 IO/s 6.64 secs/100000 ios 00:15:55.378 SPDK bdev Controller (SPDK1 ) core 2: 8004.33 IO/s 12.49 secs/100000 ios 00:15:55.378 SPDK bdev Controller (SPDK1 ) core 3: 13621.67 IO/s 7.34 secs/100000 ios 00:15:55.378 ======================================================== 00:15:55.378 00:15:55.378 19:55:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:55.378 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.378 [2024-07-24 19:55:43.041634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.378 Initializing NVMe Controllers 00:15:55.378 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.378 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.378 Namespace ID: 1 size: 0GB 00:15:55.378 Initialization complete. 00:15:55.378 INFO: using host memory buffer for IO 00:15:55.378 Hello world! 
00:15:55.378 [2024-07-24 19:55:43.075815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.378 19:55:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:55.378 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.378 [2024-07-24 19:55:43.331599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.761 Initializing NVMe Controllers 00:15:56.761 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.761 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.761 Initialization complete. Launching workers. 00:15:56.761 submit (in ns) avg, min, max = 7021.0, 3918.3, 4000317.5 00:15:56.761 complete (in ns) avg, min, max = 19328.3, 2364.2, 5992122.5 00:15:56.761 00:15:56.761 Submit histogram 00:15:56.761 ================ 00:15:56.761 Range in us Cumulative Count 00:15:56.761 3.893 - 3.920: 0.0053% ( 1) 00:15:56.761 3.920 - 3.947: 2.3549% ( 446) 00:15:56.762 3.947 - 3.973: 10.2834% ( 1505) 00:15:56.762 3.973 - 4.000: 20.6617% ( 1970) 00:15:56.762 4.000 - 4.027: 32.0620% ( 2164) 00:15:56.762 4.027 - 4.053: 43.1619% ( 2107) 00:15:56.762 4.053 - 4.080: 55.6791% ( 2376) 00:15:56.762 4.080 - 4.107: 71.9734% ( 3093) 00:15:56.762 4.107 - 4.133: 86.1606% ( 2693) 00:15:56.762 4.133 - 4.160: 94.0786% ( 1503) 00:15:56.762 4.160 - 4.187: 97.6030% ( 669) 00:15:56.762 4.187 - 4.213: 98.8252% ( 232) 00:15:56.762 4.213 - 4.240: 99.2414% ( 79) 00:15:56.762 4.240 - 4.267: 99.4152% ( 33) 00:15:56.762 4.267 - 4.293: 99.4785% ( 12) 00:15:56.762 4.293 - 4.320: 99.4837% ( 1) 00:15:56.762 4.347 - 4.373: 99.4890% ( 1) 00:15:56.762 4.373 - 4.400: 99.4943% ( 1) 00:15:56.762 4.747 - 4.773: 99.4995% ( 1) 00:15:56.762 4.853 - 4.880: 
99.5048% ( 1) 00:15:56.762 4.933 - 4.960: 99.5101% ( 1) 00:15:56.762 4.987 - 5.013: 99.5153% ( 1) 00:15:56.762 5.280 - 5.307: 99.5206% ( 1) 00:15:56.762 5.333 - 5.360: 99.5259% ( 1) 00:15:56.762 5.387 - 5.413: 99.5311% ( 1) 00:15:56.762 5.413 - 5.440: 99.5417% ( 2) 00:15:56.762 5.600 - 5.627: 99.5469% ( 1) 00:15:56.762 5.707 - 5.733: 99.5627% ( 3) 00:15:56.762 5.733 - 5.760: 99.5680% ( 1) 00:15:56.762 5.840 - 5.867: 99.5733% ( 1) 00:15:56.762 5.867 - 5.893: 99.5785% ( 1) 00:15:56.762 5.947 - 5.973: 99.5838% ( 1) 00:15:56.762 5.973 - 6.000: 99.5891% ( 1) 00:15:56.762 6.133 - 6.160: 99.5944% ( 1) 00:15:56.762 6.160 - 6.187: 99.5996% ( 1) 00:15:56.762 6.187 - 6.213: 99.6049% ( 1) 00:15:56.762 6.213 - 6.240: 99.6102% ( 1) 00:15:56.762 6.267 - 6.293: 99.6154% ( 1) 00:15:56.762 6.293 - 6.320: 99.6207% ( 1) 00:15:56.762 6.320 - 6.347: 99.6260% ( 1) 00:15:56.762 6.347 - 6.373: 99.6470% ( 4) 00:15:56.762 6.373 - 6.400: 99.6576% ( 2) 00:15:56.762 6.400 - 6.427: 99.6628% ( 1) 00:15:56.762 6.453 - 6.480: 99.6681% ( 1) 00:15:56.762 6.507 - 6.533: 99.6786% ( 2) 00:15:56.762 6.533 - 6.560: 99.6839% ( 1) 00:15:56.762 6.640 - 6.667: 99.6892% ( 1) 00:15:56.762 6.693 - 6.720: 99.6997% ( 2) 00:15:56.762 6.773 - 6.800: 99.7103% ( 2) 00:15:56.762 6.827 - 6.880: 99.7261% ( 3) 00:15:56.762 6.880 - 6.933: 99.7366% ( 2) 00:15:56.762 6.933 - 6.987: 99.7419% ( 1) 00:15:56.762 6.987 - 7.040: 99.7629% ( 4) 00:15:56.762 7.040 - 7.093: 99.7787% ( 3) 00:15:56.762 7.093 - 7.147: 99.7945% ( 3) 00:15:56.762 7.147 - 7.200: 99.8051% ( 2) 00:15:56.762 7.253 - 7.307: 99.8156% ( 2) 00:15:56.762 7.307 - 7.360: 99.8314% ( 3) 00:15:56.762 7.413 - 7.467: 99.8367% ( 1) 00:15:56.762 7.467 - 7.520: 99.8420% ( 1) 00:15:56.762 7.520 - 7.573: 99.8472% ( 1) 00:15:56.762 7.733 - 7.787: 99.8578% ( 2) 00:15:56.762 7.840 - 7.893: 99.8683% ( 2) 00:15:56.762 7.893 - 7.947: 99.8736% ( 1) 00:15:56.762 7.947 - 8.000: 99.8788% ( 1) 00:15:56.762 8.000 - 8.053: 99.8894% ( 2) 00:15:56.762 8.053 - 8.107: 99.8946% ( 1) 
00:15:56.762 8.800 - 8.853: 99.9052% ( 2) 00:15:56.762 8.907 - 8.960: 99.9104% ( 1) 00:15:56.762 14.400 - 14.507: 99.9157% ( 1) 00:15:56.762 15.253 - 15.360: 99.9210% ( 1) 00:15:56.762 15.360 - 15.467: 99.9262% ( 1) 00:15:56.762 3986.773 - 4014.080: 100.0000% ( 14) 00:15:56.762 00:15:56.762 Complete histogram 00:15:56.762 ================== 00:15:56.762 Range in us Cumulative Count 00:15:56.762 2.360 - 2.373: 0.0053% ( 1) 00:15:56.762 2.373 - 2.387: 0.0896% ( 16) 00:15:56.762 2.387 - 2.400: 0.9851% ( 170) 00:15:56.762 2.400 - 2.413: 1.0800% ( 18) 00:15:56.762 2.413 - [2024-07-24 19:55:44.352197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.762 2.427: 1.2064% ( 24) 00:15:56.762 2.427 - 2.440: 1.2538% ( 9) 00:15:56.762 2.440 - 2.453: 1.3065% ( 10) 00:15:56.762 2.453 - 2.467: 43.5781% ( 8024) 00:15:56.762 2.467 - 2.480: 57.4860% ( 2640) 00:15:56.762 2.480 - 2.493: 72.8848% ( 2923) 00:15:56.762 2.493 - 2.507: 79.5280% ( 1261) 00:15:56.762 2.507 - 2.520: 81.4877% ( 372) 00:15:56.762 2.520 - 2.533: 85.3862% ( 740) 00:15:56.762 2.533 - 2.547: 91.3339% ( 1129) 00:15:56.762 2.547 - 2.560: 95.4167% ( 775) 00:15:56.762 2.560 - 2.573: 97.6293% ( 420) 00:15:56.762 2.573 - 2.587: 98.8199% ( 226) 00:15:56.762 2.587 - 2.600: 99.2203% ( 76) 00:15:56.762 2.600 - 2.613: 99.2677% ( 9) 00:15:56.762 2.613 - 2.627: 99.3046% ( 7) 00:15:56.762 2.760 - 2.773: 99.3099% ( 1) 00:15:56.762 2.907 - 2.920: 99.3151% ( 1) 00:15:56.762 4.720 - 4.747: 99.3204% ( 1) 00:15:56.762 4.747 - 4.773: 99.3257% ( 1) 00:15:56.762 4.800 - 4.827: 99.3309% ( 1) 00:15:56.762 4.933 - 4.960: 99.3362% ( 1) 00:15:56.762 4.960 - 4.987: 99.3415% ( 1) 00:15:56.762 5.040 - 5.067: 99.3467% ( 1) 00:15:56.762 5.093 - 5.120: 99.3520% ( 1) 00:15:56.762 5.147 - 5.173: 99.3573% ( 1) 00:15:56.762 5.200 - 5.227: 99.3784% ( 4) 00:15:56.762 5.227 - 5.253: 99.3836% ( 1) 00:15:56.762 5.253 - 5.280: 99.3889% ( 1) 00:15:56.762 5.280 - 5.307: 99.3994% ( 2) 00:15:56.762 
5.333 - 5.360: 99.4152% ( 3) 00:15:56.762 5.360 - 5.387: 99.4310% ( 3) 00:15:56.762 5.387 - 5.413: 99.4416% ( 2) 00:15:56.762 5.440 - 5.467: 99.4468% ( 1) 00:15:56.762 5.467 - 5.493: 99.4626% ( 3) 00:15:56.762 5.520 - 5.547: 99.4679% ( 1) 00:15:56.762 5.573 - 5.600: 99.4732% ( 1) 00:15:56.762 5.627 - 5.653: 99.4785% ( 1) 00:15:56.762 5.653 - 5.680: 99.4837% ( 1) 00:15:56.762 5.680 - 5.707: 99.4890% ( 1) 00:15:56.762 5.707 - 5.733: 99.4943% ( 1) 00:15:56.762 5.733 - 5.760: 99.4995% ( 1) 00:15:56.762 5.760 - 5.787: 99.5048% ( 1) 00:15:56.762 5.813 - 5.840: 99.5101% ( 1) 00:15:56.762 5.840 - 5.867: 99.5153% ( 1) 00:15:56.762 5.973 - 6.000: 99.5206% ( 1) 00:15:56.762 6.080 - 6.107: 99.5259% ( 1) 00:15:56.762 6.133 - 6.160: 99.5311% ( 1) 00:15:56.762 6.160 - 6.187: 99.5364% ( 1) 00:15:56.762 6.187 - 6.213: 99.5417% ( 1) 00:15:56.762 6.213 - 6.240: 99.5469% ( 1) 00:15:56.762 6.453 - 6.480: 99.5522% ( 1) 00:15:56.762 6.560 - 6.587: 99.5575% ( 1) 00:15:56.762 6.827 - 6.880: 99.5627% ( 1) 00:15:56.762 8.213 - 8.267: 99.5680% ( 1) 00:15:56.762 11.200 - 11.253: 99.5733% ( 1) 00:15:56.762 12.693 - 12.747: 99.5785% ( 1) 00:15:56.762 1993.387 - 2007.040: 99.5838% ( 1) 00:15:56.762 3986.773 - 4014.080: 99.9947% ( 78) 00:15:56.762 5980.160 - 6007.467: 100.0000% ( 1) 00:15:56.762 00:15:56.762 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:56.762 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:56.762 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:56.762 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:56.762 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:56.762 [ 00:15:56.762 { 00:15:56.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:56.762 "subtype": "Discovery", 00:15:56.762 "listen_addresses": [], 00:15:56.762 "allow_any_host": true, 00:15:56.762 "hosts": [] 00:15:56.762 }, 00:15:56.762 { 00:15:56.762 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:56.762 "subtype": "NVMe", 00:15:56.762 "listen_addresses": [ 00:15:56.762 { 00:15:56.762 "trtype": "VFIOUSER", 00:15:56.762 "adrfam": "IPv4", 00:15:56.762 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:56.762 "trsvcid": "0" 00:15:56.762 } 00:15:56.762 ], 00:15:56.762 "allow_any_host": true, 00:15:56.762 "hosts": [], 00:15:56.762 "serial_number": "SPDK1", 00:15:56.762 "model_number": "SPDK bdev Controller", 00:15:56.762 "max_namespaces": 32, 00:15:56.762 "min_cntlid": 1, 00:15:56.762 "max_cntlid": 65519, 00:15:56.762 "namespaces": [ 00:15:56.762 { 00:15:56.762 "nsid": 1, 00:15:56.762 "bdev_name": "Malloc1", 00:15:56.762 "name": "Malloc1", 00:15:56.762 "nguid": "90802E6CBAED4372A086D99190B36A80", 00:15:56.762 "uuid": "90802e6c-baed-4372-a086-d99190b36a80" 00:15:56.762 } 00:15:56.762 ] 00:15:56.762 }, 00:15:56.762 { 00:15:56.762 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:56.762 "subtype": "NVMe", 00:15:56.762 "listen_addresses": [ 00:15:56.762 { 00:15:56.762 "trtype": "VFIOUSER", 00:15:56.762 "adrfam": "IPv4", 00:15:56.762 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:56.763 "trsvcid": "0" 00:15:56.763 } 00:15:56.763 ], 00:15:56.763 "allow_any_host": true, 00:15:56.763 "hosts": [], 00:15:56.763 "serial_number": "SPDK2", 00:15:56.763 "model_number": "SPDK bdev Controller", 00:15:56.763 "max_namespaces": 32, 00:15:56.763 "min_cntlid": 1, 00:15:56.763 "max_cntlid": 65519, 00:15:56.763 "namespaces": [ 00:15:56.763 { 00:15:56.763 "nsid": 1, 00:15:56.763 "bdev_name": "Malloc2", 00:15:56.763 "name": "Malloc2", 00:15:56.763 "nguid": 
"3CBEFFFECFBB4F23BE00C7AD4A467777", 00:15:56.763 "uuid": "3cbefffe-cfbb-4f23-be00-c7ad4a467777" 00:15:56.763 } 00:15:56.763 ] 00:15:56.763 } 00:15:56.763 ] 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3648272 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:56.763 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:56.763 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.023 Malloc3 00:15:57.023 [2024-07-24 19:55:44.736651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:57.023 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:57.023 [2024-07-24 19:55:44.906827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:57.023 19:55:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.023 Asynchronous Event Request test 00:15:57.023 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.023 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.023 Registering asynchronous event callbacks... 00:15:57.023 Starting namespace attribute notice tests for all controllers... 00:15:57.023 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:57.023 aer_cb - Changed Namespace 00:15:57.023 Cleaning up... 
00:15:57.285 [ 00:15:57.285 { 00:15:57.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.285 "subtype": "Discovery", 00:15:57.285 "listen_addresses": [], 00:15:57.285 "allow_any_host": true, 00:15:57.285 "hosts": [] 00:15:57.285 }, 00:15:57.285 { 00:15:57.285 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.285 "subtype": "NVMe", 00:15:57.285 "listen_addresses": [ 00:15:57.285 { 00:15:57.285 "trtype": "VFIOUSER", 00:15:57.285 "adrfam": "IPv4", 00:15:57.285 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.285 "trsvcid": "0" 00:15:57.285 } 00:15:57.285 ], 00:15:57.285 "allow_any_host": true, 00:15:57.285 "hosts": [], 00:15:57.285 "serial_number": "SPDK1", 00:15:57.285 "model_number": "SPDK bdev Controller", 00:15:57.285 "max_namespaces": 32, 00:15:57.285 "min_cntlid": 1, 00:15:57.285 "max_cntlid": 65519, 00:15:57.285 "namespaces": [ 00:15:57.285 { 00:15:57.285 "nsid": 1, 00:15:57.285 "bdev_name": "Malloc1", 00:15:57.285 "name": "Malloc1", 00:15:57.285 "nguid": "90802E6CBAED4372A086D99190B36A80", 00:15:57.285 "uuid": "90802e6c-baed-4372-a086-d99190b36a80" 00:15:57.285 }, 00:15:57.285 { 00:15:57.285 "nsid": 2, 00:15:57.285 "bdev_name": "Malloc3", 00:15:57.285 "name": "Malloc3", 00:15:57.285 "nguid": "51F8C4B03390417ABF0CDC7D19342288", 00:15:57.285 "uuid": "51f8c4b0-3390-417a-bf0c-dc7d19342288" 00:15:57.285 } 00:15:57.285 ] 00:15:57.285 }, 00:15:57.285 { 00:15:57.285 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.285 "subtype": "NVMe", 00:15:57.285 "listen_addresses": [ 00:15:57.285 { 00:15:57.285 "trtype": "VFIOUSER", 00:15:57.285 "adrfam": "IPv4", 00:15:57.285 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.285 "trsvcid": "0" 00:15:57.285 } 00:15:57.285 ], 00:15:57.285 "allow_any_host": true, 00:15:57.286 "hosts": [], 00:15:57.286 "serial_number": "SPDK2", 00:15:57.286 "model_number": "SPDK bdev Controller", 00:15:57.286 "max_namespaces": 32, 00:15:57.286 "min_cntlid": 1, 00:15:57.286 "max_cntlid": 65519, 00:15:57.286 "namespaces": [ 
00:15:57.286 { 00:15:57.286 "nsid": 1, 00:15:57.286 "bdev_name": "Malloc2", 00:15:57.286 "name": "Malloc2", 00:15:57.286 "nguid": "3CBEFFFECFBB4F23BE00C7AD4A467777", 00:15:57.286 "uuid": "3cbefffe-cfbb-4f23-be00-c7ad4a467777" 00:15:57.286 } 00:15:57.286 ] 00:15:57.286 } 00:15:57.286 ] 00:15:57.286 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3648272 00:15:57.286 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:57.286 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:57.286 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:57.286 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:57.286 [2024-07-24 19:55:45.117240] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:15:57.286 [2024-07-24 19:55:45.117319] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648288 ] 00:15:57.286 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.286 [2024-07-24 19:55:45.154742] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:57.286 [2024-07-24 19:55:45.163430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:57.286 [2024-07-24 19:55:45.163451] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f91c932a000 00:15:57.286 [2024-07-24 19:55:45.164435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.165433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.166445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.167458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.168462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.169467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.170471] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.171478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:57.286 [2024-07-24 19:55:45.172485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:57.286 [2024-07-24 19:55:45.172495] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f91c931f000 00:15:57.286 [2024-07-24 19:55:45.173819] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:57.286 [2024-07-24 19:55:45.190036] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:57.286 [2024-07-24 19:55:45.190061] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:57.286 [2024-07-24 19:55:45.195129] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:57.286 [2024-07-24 19:55:45.195174] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:57.286 [2024-07-24 19:55:45.195262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:57.286 [2024-07-24 19:55:45.195277] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:57.286 [2024-07-24 19:55:45.195282] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:57.286 [2024-07-24 19:55:45.196136] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:57.286 [2024-07-24 19:55:45.196147] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:57.286 [2024-07-24 19:55:45.196155] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:57.286 [2024-07-24 19:55:45.197137] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:57.286 [2024-07-24 19:55:45.197147] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:57.286 [2024-07-24 19:55:45.197154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:57.286 [2024-07-24 19:55:45.198140] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:57.286 [2024-07-24 19:55:45.198150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:57.286 [2024-07-24 19:55:45.199147] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:57.286 [2024-07-24 19:55:45.199157] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:57.286 [2024-07-24 19:55:45.199161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:57.286 [2024-07-24 19:55:45.199171] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:57.286 [2024-07-24 19:55:45.199277] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:57.286 [2024-07-24 19:55:45.199282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:57.286 [2024-07-24 19:55:45.199287] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:57.286 [2024-07-24 19:55:45.200154] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:57.286 [2024-07-24 19:55:45.201166] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:57.286 [2024-07-24 19:55:45.202174] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:57.286 [2024-07-24 19:55:45.203175] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:57.286 [2024-07-24 19:55:45.203219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:57.286 [2024-07-24 19:55:45.204185] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:57.286 [2024-07-24 19:55:45.204193] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:57.286 [2024-07-24 19:55:45.204198] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:57.286 [2024-07-24 19:55:45.204222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:57.286 [2024-07-24 19:55:45.204230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:57.286 [2024-07-24 19:55:45.204242] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:57.286 [2024-07-24 19:55:45.204247] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:57.286 [2024-07-24 19:55:45.204251] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:57.286 [2024-07-24 19:55:45.204262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.286 [2024-07-24 19:55:45.211209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:57.286 [2024-07-24 19:55:45.211223] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:57.286 [2024-07-24 19:55:45.211228] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:57.286 [2024-07-24 19:55:45.211233] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:57.286 [2024-07-24 19:55:45.211237] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:57.286 [2024-07-24 19:55:45.211242] 
nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:57.286 [2024-07-24 19:55:45.211247] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:57.286 [2024-07-24 19:55:45.211251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:57.286 [2024-07-24 19:55:45.211262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:57.286 [2024-07-24 19:55:45.211275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:57.286 [2024-07-24 19:55:45.219207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:57.286 [2024-07-24 19:55:45.219222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.286 [2024-07-24 19:55:45.219231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.286 [2024-07-24 19:55:45.219239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.286 [2024-07-24 19:55:45.219247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.286 [2024-07-24 19:55:45.219252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:57.287 [2024-07-24 19:55:45.219260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:57.287 [2024-07-24 19:55:45.219269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:57.287 [2024-07-24 19:55:45.227207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:57.287 [2024-07-24 19:55:45.227215] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:57.287 [2024-07-24 19:55:45.227220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:57.287 [2024-07-24 19:55:45.227229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:57.287 [2024-07-24 19:55:45.227234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:57.287 [2024-07-24 19:55:45.227243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:57.287 [2024-07-24 19:55:45.235208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:57.287 [2024-07-24 19:55:45.235274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:57.287 [2024-07-24 19:55:45.235282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:57.287 [2024-07-24 19:55:45.235290] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:57.287 [2024-07-24 19:55:45.235295] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:57.287 [2024-07-24 19:55:45.235298] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:57.287 [2024-07-24 19:55:45.235305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:57.549 [2024-07-24 19:55:45.243208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:57.549 [2024-07-24 19:55:45.243224] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:57.550 [2024-07-24 19:55:45.243236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.243245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.243252] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:57.550 [2024-07-24 19:55:45.243256] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:57.550 [2024-07-24 19:55:45.243260] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:57.550 [2024-07-24 19:55:45.243266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.251207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.251224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.251232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.251240] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:57.550 [2024-07-24 19:55:45.251244] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:57.550 [2024-07-24 19:55:45.251248] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:57.550 [2024-07-24 19:55:45.251254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.259208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.259218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.259224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.259233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.259240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:57.550 
[2024-07-24 19:55:45.259245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.259249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.259254] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:57.550 [2024-07-24 19:55:45.259259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:57.550 [2024-07-24 19:55:45.259264] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:57.550 [2024-07-24 19:55:45.259280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.267206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.267223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.275206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.275219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.283208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.283221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.291206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.291222] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:57.550 [2024-07-24 19:55:45.291227] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:57.550 [2024-07-24 19:55:45.291230] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:57.550 [2024-07-24 19:55:45.291234] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:57.550 [2024-07-24 19:55:45.291237] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:57.550 [2024-07-24 19:55:45.291243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:57.550 [2024-07-24 19:55:45.291251] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:57.550 [2024-07-24 19:55:45.291255] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:57.550 [2024-07-24 19:55:45.291259] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:57.550 [2024-07-24 19:55:45.291264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.291272] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:57.550 [2024-07-24 19:55:45.291276] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:15:57.550 [2024-07-24 19:55:45.291279] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:57.550 [2024-07-24 19:55:45.291285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.291292] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:57.550 [2024-07-24 19:55:45.291297] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:57.550 [2024-07-24 19:55:45.291300] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:57.550 [2024-07-24 19:55:45.291305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:57.550 [2024-07-24 19:55:45.299208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.299223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.299233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:57.550 [2024-07-24 19:55:45.299240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:57.550 ===================================================== 00:15:57.550 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:57.550 ===================================================== 00:15:57.550 Controller Capabilities/Features 00:15:57.550 ================================ 00:15:57.550 Vendor ID: 4e58 00:15:57.550 
Subsystem Vendor ID: 4e58 00:15:57.550 Serial Number: SPDK2 00:15:57.550 Model Number: SPDK bdev Controller 00:15:57.550 Firmware Version: 24.09 00:15:57.550 Recommended Arb Burst: 6 00:15:57.550 IEEE OUI Identifier: 8d 6b 50 00:15:57.550 Multi-path I/O 00:15:57.550 May have multiple subsystem ports: Yes 00:15:57.550 May have multiple controllers: Yes 00:15:57.550 Associated with SR-IOV VF: No 00:15:57.550 Max Data Transfer Size: 131072 00:15:57.550 Max Number of Namespaces: 32 00:15:57.550 Max Number of I/O Queues: 127 00:15:57.550 NVMe Specification Version (VS): 1.3 00:15:57.550 NVMe Specification Version (Identify): 1.3 00:15:57.550 Maximum Queue Entries: 256 00:15:57.550 Contiguous Queues Required: Yes 00:15:57.550 Arbitration Mechanisms Supported 00:15:57.550 Weighted Round Robin: Not Supported 00:15:57.550 Vendor Specific: Not Supported 00:15:57.550 Reset Timeout: 15000 ms 00:15:57.550 Doorbell Stride: 4 bytes 00:15:57.550 NVM Subsystem Reset: Not Supported 00:15:57.550 Command Sets Supported 00:15:57.550 NVM Command Set: Supported 00:15:57.550 Boot Partition: Not Supported 00:15:57.550 Memory Page Size Minimum: 4096 bytes 00:15:57.550 Memory Page Size Maximum: 4096 bytes 00:15:57.550 Persistent Memory Region: Not Supported 00:15:57.550 Optional Asynchronous Events Supported 00:15:57.550 Namespace Attribute Notices: Supported 00:15:57.550 Firmware Activation Notices: Not Supported 00:15:57.550 ANA Change Notices: Not Supported 00:15:57.550 PLE Aggregate Log Change Notices: Not Supported 00:15:57.550 LBA Status Info Alert Notices: Not Supported 00:15:57.550 EGE Aggregate Log Change Notices: Not Supported 00:15:57.550 Normal NVM Subsystem Shutdown event: Not Supported 00:15:57.550 Zone Descriptor Change Notices: Not Supported 00:15:57.550 Discovery Log Change Notices: Not Supported 00:15:57.550 Controller Attributes 00:15:57.550 128-bit Host Identifier: Supported 00:15:57.550 Non-Operational Permissive Mode: Not Supported 00:15:57.550 NVM Sets: Not Supported 
00:15:57.550 Read Recovery Levels: Not Supported 00:15:57.550 Endurance Groups: Not Supported 00:15:57.550 Predictable Latency Mode: Not Supported 00:15:57.550 Traffic Based Keep ALive: Not Supported 00:15:57.550 Namespace Granularity: Not Supported 00:15:57.550 SQ Associations: Not Supported 00:15:57.550 UUID List: Not Supported 00:15:57.550 Multi-Domain Subsystem: Not Supported 00:15:57.550 Fixed Capacity Management: Not Supported 00:15:57.550 Variable Capacity Management: Not Supported 00:15:57.550 Delete Endurance Group: Not Supported 00:15:57.551 Delete NVM Set: Not Supported 00:15:57.551 Extended LBA Formats Supported: Not Supported 00:15:57.551 Flexible Data Placement Supported: Not Supported 00:15:57.551 00:15:57.551 Controller Memory Buffer Support 00:15:57.551 ================================ 00:15:57.551 Supported: No 00:15:57.551 00:15:57.551 Persistent Memory Region Support 00:15:57.551 ================================ 00:15:57.551 Supported: No 00:15:57.551 00:15:57.551 Admin Command Set Attributes 00:15:57.551 ============================ 00:15:57.551 Security Send/Receive: Not Supported 00:15:57.551 Format NVM: Not Supported 00:15:57.551 Firmware Activate/Download: Not Supported 00:15:57.551 Namespace Management: Not Supported 00:15:57.551 Device Self-Test: Not Supported 00:15:57.551 Directives: Not Supported 00:15:57.551 NVMe-MI: Not Supported 00:15:57.551 Virtualization Management: Not Supported 00:15:57.551 Doorbell Buffer Config: Not Supported 00:15:57.551 Get LBA Status Capability: Not Supported 00:15:57.551 Command & Feature Lockdown Capability: Not Supported 00:15:57.551 Abort Command Limit: 4 00:15:57.551 Async Event Request Limit: 4 00:15:57.551 Number of Firmware Slots: N/A 00:15:57.551 Firmware Slot 1 Read-Only: N/A 00:15:57.551 Firmware Activation Without Reset: N/A 00:15:57.551 Multiple Update Detection Support: N/A 00:15:57.551 Firmware Update Granularity: No Information Provided 00:15:57.551 Per-Namespace SMART Log: No 00:15:57.551 
Asymmetric Namespace Access Log Page: Not Supported 00:15:57.551 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:57.551 Command Effects Log Page: Supported 00:15:57.551 Get Log Page Extended Data: Supported 00:15:57.551 Telemetry Log Pages: Not Supported 00:15:57.551 Persistent Event Log Pages: Not Supported 00:15:57.551 Supported Log Pages Log Page: May Support 00:15:57.551 Commands Supported & Effects Log Page: Not Supported 00:15:57.551 Feature Identifiers & Effects Log Page:May Support 00:15:57.551 NVMe-MI Commands & Effects Log Page: May Support 00:15:57.551 Data Area 4 for Telemetry Log: Not Supported 00:15:57.551 Error Log Page Entries Supported: 128 00:15:57.551 Keep Alive: Supported 00:15:57.551 Keep Alive Granularity: 10000 ms 00:15:57.551 00:15:57.551 NVM Command Set Attributes 00:15:57.551 ========================== 00:15:57.551 Submission Queue Entry Size 00:15:57.551 Max: 64 00:15:57.551 Min: 64 00:15:57.551 Completion Queue Entry Size 00:15:57.551 Max: 16 00:15:57.551 Min: 16 00:15:57.551 Number of Namespaces: 32 00:15:57.551 Compare Command: Supported 00:15:57.551 Write Uncorrectable Command: Not Supported 00:15:57.551 Dataset Management Command: Supported 00:15:57.551 Write Zeroes Command: Supported 00:15:57.551 Set Features Save Field: Not Supported 00:15:57.551 Reservations: Not Supported 00:15:57.551 Timestamp: Not Supported 00:15:57.551 Copy: Supported 00:15:57.551 Volatile Write Cache: Present 00:15:57.551 Atomic Write Unit (Normal): 1 00:15:57.551 Atomic Write Unit (PFail): 1 00:15:57.551 Atomic Compare & Write Unit: 1 00:15:57.551 Fused Compare & Write: Supported 00:15:57.551 Scatter-Gather List 00:15:57.551 SGL Command Set: Supported (Dword aligned) 00:15:57.551 SGL Keyed: Not Supported 00:15:57.551 SGL Bit Bucket Descriptor: Not Supported 00:15:57.551 SGL Metadata Pointer: Not Supported 00:15:57.551 Oversized SGL: Not Supported 00:15:57.551 SGL Metadata Address: Not Supported 00:15:57.551 SGL Offset: Not Supported 00:15:57.551 Transport 
SGL Data Block: Not Supported 00:15:57.551 Replay Protected Memory Block: Not Supported 00:15:57.551 00:15:57.551 Firmware Slot Information 00:15:57.551 ========================= 00:15:57.551 Active slot: 1 00:15:57.551 Slot 1 Firmware Revision: 24.09 00:15:57.551 00:15:57.551 00:15:57.551 Commands Supported and Effects 00:15:57.551 ============================== 00:15:57.551 Admin Commands 00:15:57.551 -------------- 00:15:57.551 Get Log Page (02h): Supported 00:15:57.551 Identify (06h): Supported 00:15:57.551 Abort (08h): Supported 00:15:57.551 Set Features (09h): Supported 00:15:57.551 Get Features (0Ah): Supported 00:15:57.551 Asynchronous Event Request (0Ch): Supported 00:15:57.551 Keep Alive (18h): Supported 00:15:57.551 I/O Commands 00:15:57.551 ------------ 00:15:57.551 Flush (00h): Supported LBA-Change 00:15:57.551 Write (01h): Supported LBA-Change 00:15:57.551 Read (02h): Supported 00:15:57.551 Compare (05h): Supported 00:15:57.551 Write Zeroes (08h): Supported LBA-Change 00:15:57.551 Dataset Management (09h): Supported LBA-Change 00:15:57.551 Copy (19h): Supported LBA-Change 00:15:57.551 00:15:57.551 Error Log 00:15:57.551 ========= 00:15:57.551 00:15:57.551 Arbitration 00:15:57.551 =========== 00:15:57.551 Arbitration Burst: 1 00:15:57.551 00:15:57.551 Power Management 00:15:57.551 ================ 00:15:57.551 Number of Power States: 1 00:15:57.551 Current Power State: Power State #0 00:15:57.551 Power State #0: 00:15:57.551 Max Power: 0.00 W 00:15:57.551 Non-Operational State: Operational 00:15:57.551 Entry Latency: Not Reported 00:15:57.551 Exit Latency: Not Reported 00:15:57.551 Relative Read Throughput: 0 00:15:57.551 Relative Read Latency: 0 00:15:57.551 Relative Write Throughput: 0 00:15:57.551 Relative Write Latency: 0 00:15:57.551 Idle Power: Not Reported 00:15:57.551 Active Power: Not Reported 00:15:57.551 Non-Operational Permissive Mode: Not Supported 00:15:57.551 00:15:57.551 Health Information 00:15:57.551 ================== 00:15:57.551 
Critical Warnings: 00:15:57.551 Available Spare Space: OK 00:15:57.551 Temperature: OK 00:15:57.551 Device Reliability: OK 00:15:57.551 Read Only: No 00:15:57.551 Volatile Memory Backup: OK 00:15:57.551 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:57.551 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:57.551 Available Spare: 0% 00:15:57.551 Available Sp[2024-07-24 19:55:45.299338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:57.551 [2024-07-24 19:55:45.307210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:57.551 [2024-07-24 19:55:45.307241] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:57.551 [2024-07-24 19:55:45.307250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.551 [2024-07-24 19:55:45.307256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.551 [2024-07-24 19:55:45.307262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.551 [2024-07-24 19:55:45.307268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.551 [2024-07-24 19:55:45.307312] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:57.551 [2024-07-24 19:55:45.307323] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:57.551 [2024-07-24 19:55:45.308310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:57.551 [2024-07-24 19:55:45.308359] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:57.551 [2024-07-24 19:55:45.308366] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:57.551 [2024-07-24 19:55:45.309321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:57.551 [2024-07-24 19:55:45.309337] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:57.551 [2024-07-24 19:55:45.309388] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:57.551 [2024-07-24 19:55:45.310769] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:57.551 are Threshold: 0% 00:15:57.551 Life Percentage Used: 0% 00:15:57.551 Data Units Read: 0 00:15:57.551 Data Units Written: 0 00:15:57.551 Host Read Commands: 0 00:15:57.551 Host Write Commands: 0 00:15:57.551 Controller Busy Time: 0 minutes 00:15:57.551 Power Cycles: 0 00:15:57.551 Power On Hours: 0 hours 00:15:57.551 Unsafe Shutdowns: 0 00:15:57.551 Unrecoverable Media Errors: 0 00:15:57.551 Lifetime Error Log Entries: 0 00:15:57.551 Warning Temperature Time: 0 minutes 00:15:57.551 Critical Temperature Time: 0 minutes 00:15:57.551 00:15:57.551 Number of Queues 00:15:57.551 ================ 00:15:57.551 Number of I/O Submission Queues: 127 00:15:57.552 Number of I/O Completion Queues: 127 00:15:57.552 00:15:57.552 Active Namespaces 00:15:57.552 ================= 00:15:57.552 Namespace ID:1 00:15:57.552 Error Recovery Timeout: Unlimited 00:15:57.552 Command Set Identifier: NVM (00h) 00:15:57.552 Deallocate: 
Supported 00:15:57.552 Deallocated/Unwritten Error: Not Supported 00:15:57.552 Deallocated Read Value: Unknown 00:15:57.552 Deallocate in Write Zeroes: Not Supported 00:15:57.552 Deallocated Guard Field: 0xFFFF 00:15:57.552 Flush: Supported 00:15:57.552 Reservation: Supported 00:15:57.552 Namespace Sharing Capabilities: Multiple Controllers 00:15:57.552 Size (in LBAs): 131072 (0GiB) 00:15:57.552 Capacity (in LBAs): 131072 (0GiB) 00:15:57.552 Utilization (in LBAs): 131072 (0GiB) 00:15:57.552 NGUID: 3CBEFFFECFBB4F23BE00C7AD4A467777 00:15:57.552 UUID: 3cbefffe-cfbb-4f23-be00-c7ad4a467777 00:15:57.552 Thin Provisioning: Not Supported 00:15:57.552 Per-NS Atomic Units: Yes 00:15:57.552 Atomic Boundary Size (Normal): 0 00:15:57.552 Atomic Boundary Size (PFail): 0 00:15:57.552 Atomic Boundary Offset: 0 00:15:57.552 Maximum Single Source Range Length: 65535 00:15:57.552 Maximum Copy Length: 65535 00:15:57.552 Maximum Source Range Count: 1 00:15:57.552 NGUID/EUI64 Never Reused: No 00:15:57.552 Namespace Write Protected: No 00:15:57.552 Number of LBA Formats: 1 00:15:57.552 Current LBA Format: LBA Format #00 00:15:57.552 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:57.552 00:15:57.552 19:55:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:57.552 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.552 [2024-07-24 19:55:45.496027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:02.839 Initializing NVMe Controllers 00:16:02.839 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:02.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:02.839 
Initialization complete. Launching workers. 00:16:02.839 ======================================================== 00:16:02.839 Latency(us) 00:16:02.839 Device Information : IOPS MiB/s Average min max 00:16:02.839 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40102.33 156.65 3191.51 842.29 6807.14 00:16:02.839 ======================================================== 00:16:02.839 Total : 40102.33 156.65 3191.51 842.29 6807.14 00:16:02.839 00:16:02.839 [2024-07-24 19:55:50.593382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:02.839 19:55:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:02.839 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.839 [2024-07-24 19:55:50.773939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:08.174 Initializing NVMe Controllers 00:16:08.174 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:08.174 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:08.174 Initialization complete. Launching workers. 
00:16:08.174 ======================================================== 00:16:08.174 Latency(us) 00:16:08.174 Device Information : IOPS MiB/s Average min max 00:16:08.174 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35266.00 137.76 3629.91 1108.10 10361.44 00:16:08.174 ======================================================== 00:16:08.174 Total : 35266.00 137.76 3629.91 1108.10 10361.44 00:16:08.174 00:16:08.174 [2024-07-24 19:55:55.794506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:08.174 19:55:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:08.174 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.174 [2024-07-24 19:55:55.983684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:13.467 [2024-07-24 19:56:01.128280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:13.467 Initializing NVMe Controllers 00:16:13.467 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:13.467 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:13.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:13.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:13.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:13.467 Initialization complete. Launching workers. 
00:16:13.467 Starting thread on core 2 00:16:13.467 Starting thread on core 3 00:16:13.467 Starting thread on core 1 00:16:13.467 19:56:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:13.467 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.467 [2024-07-24 19:56:01.389679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:16.768 [2024-07-24 19:56:04.437310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:16.768 Initializing NVMe Controllers 00:16:16.768 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:16.768 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:16.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:16.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:16.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:16.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:16.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:16.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:16.768 Initialization complete. Launching workers. 
00:16:16.768 Starting thread on core 1 with urgent priority queue 00:16:16.768 Starting thread on core 2 with urgent priority queue 00:16:16.768 Starting thread on core 3 with urgent priority queue 00:16:16.768 Starting thread on core 0 with urgent priority queue 00:16:16.768 SPDK bdev Controller (SPDK2 ) core 0: 17332.33 IO/s 5.77 secs/100000 ios 00:16:16.768 SPDK bdev Controller (SPDK2 ) core 1: 6762.67 IO/s 14.79 secs/100000 ios 00:16:16.768 SPDK bdev Controller (SPDK2 ) core 2: 15886.67 IO/s 6.29 secs/100000 ios 00:16:16.768 SPDK bdev Controller (SPDK2 ) core 3: 11481.00 IO/s 8.71 secs/100000 ios 00:16:16.768 ======================================================== 00:16:16.768 00:16:16.768 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:16.768 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.768 [2024-07-24 19:56:04.697560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:16.768 Initializing NVMe Controllers 00:16:16.768 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:16.768 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:16.768 Namespace ID: 1 size: 0GB 00:16:16.768 Initialization complete. 00:16:16.768 INFO: using host memory buffer for IO 00:16:16.768 Hello world! 
00:16:16.768 [2024-07-24 19:56:04.708639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.029 19:56:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:17.029 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.029 [2024-07-24 19:56:04.968196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.415 Initializing NVMe Controllers 00:16:18.415 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.415 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.415 Initialization complete. Launching workers. 00:16:18.415 submit (in ns) avg, min, max = 8224.9, 3910.8, 4000468.3 00:16:18.415 complete (in ns) avg, min, max = 18060.9, 2369.2, 4993724.2 00:16:18.415 00:16:18.415 Submit histogram 00:16:18.415 ================ 00:16:18.415 Range in us Cumulative Count 00:16:18.415 3.893 - 3.920: 0.4423% ( 85) 00:16:18.415 3.920 - 3.947: 3.3774% ( 564) 00:16:18.415 3.947 - 3.973: 12.7498% ( 1801) 00:16:18.415 3.973 - 4.000: 24.0841% ( 2178) 00:16:18.415 4.000 - 4.027: 34.8980% ( 2078) 00:16:18.415 4.027 - 4.053: 45.4465% ( 2027) 00:16:18.415 4.053 - 4.080: 59.3256% ( 2667) 00:16:18.415 4.080 - 4.107: 74.0893% ( 2837) 00:16:18.415 4.107 - 4.133: 87.6405% ( 2604) 00:16:18.415 4.133 - 4.160: 95.2956% ( 1471) 00:16:18.415 4.160 - 4.187: 98.1578% ( 550) 00:16:18.415 4.187 - 4.213: 98.9748% ( 157) 00:16:18.415 4.213 - 4.240: 99.2766% ( 58) 00:16:18.415 4.240 - 4.267: 99.3339% ( 11) 00:16:18.415 4.267 - 4.293: 99.3963% ( 12) 00:16:18.415 4.293 - 4.320: 99.4224% ( 5) 00:16:18.415 4.320 - 4.347: 99.4640% ( 8) 00:16:18.415 4.347 - 4.373: 99.4848% ( 4) 00:16:18.415 4.373 - 4.400: 99.4900% ( 1) 00:16:18.415 4.400 - 
4.427: 99.4952% ( 1) 00:16:18.415 4.507 - 4.533: 99.5108% ( 3) 00:16:18.415 4.720 - 4.747: 99.5160% ( 1) 00:16:18.415 4.827 - 4.853: 99.5212% ( 1) 00:16:18.415 4.880 - 4.907: 99.5264% ( 1) 00:16:18.415 4.933 - 4.960: 99.5368% ( 2) 00:16:18.415 5.120 - 5.147: 99.5420% ( 1) 00:16:18.415 5.280 - 5.307: 99.5473% ( 1) 00:16:18.415 5.360 - 5.387: 99.5525% ( 1) 00:16:18.415 5.520 - 5.547: 99.5577% ( 1) 00:16:18.415 5.760 - 5.787: 99.5629% ( 1) 00:16:18.415 5.787 - 5.813: 99.5733% ( 2) 00:16:18.415 5.840 - 5.867: 99.5785% ( 1) 00:16:18.415 5.947 - 5.973: 99.5837% ( 1) 00:16:18.415 6.000 - 6.027: 99.5889% ( 1) 00:16:18.415 6.053 - 6.080: 99.5941% ( 1) 00:16:18.415 6.080 - 6.107: 99.5993% ( 1) 00:16:18.415 6.107 - 6.133: 99.6097% ( 2) 00:16:18.415 6.160 - 6.187: 99.6149% ( 1) 00:16:18.415 6.213 - 6.240: 99.6357% ( 4) 00:16:18.415 6.267 - 6.293: 99.6409% ( 1) 00:16:18.415 6.293 - 6.320: 99.6461% ( 1) 00:16:18.415 6.320 - 6.347: 99.6513% ( 1) 00:16:18.415 6.347 - 6.373: 99.6565% ( 1) 00:16:18.415 6.427 - 6.453: 99.6669% ( 2) 00:16:18.415 6.480 - 6.507: 99.6774% ( 2) 00:16:18.415 6.533 - 6.560: 99.6826% ( 1) 00:16:18.415 6.560 - 6.587: 99.7034% ( 4) 00:16:18.415 6.587 - 6.613: 99.7086% ( 1) 00:16:18.415 6.613 - 6.640: 99.7138% ( 1) 00:16:18.416 6.693 - 6.720: 99.7190% ( 1) 00:16:18.416 6.720 - 6.747: 99.7242% ( 1) 00:16:18.416 6.773 - 6.800: 99.7294% ( 1) 00:16:18.416 6.827 - 6.880: 99.7502% ( 4) 00:16:18.416 6.880 - 6.933: 99.7554% ( 1) 00:16:18.416 6.933 - 6.987: 99.7606% ( 1) 00:16:18.416 6.987 - 7.040: 99.7658% ( 1) 00:16:18.416 7.093 - 7.147: 99.7762% ( 2) 00:16:18.416 7.147 - 7.200: 99.7918% ( 3) 00:16:18.416 7.200 - 7.253: 99.7970% ( 1) 00:16:18.416 7.253 - 7.307: 99.8022% ( 1) 00:16:18.416 7.307 - 7.360: 99.8075% ( 1) 00:16:18.416 7.360 - 7.413: 99.8127% ( 1) 00:16:18.416 7.520 - 7.573: 99.8179% ( 1) 00:16:18.416 7.627 - 7.680: 99.8335% ( 3) 00:16:18.416 7.893 - 7.947: 99.8439% ( 2) 00:16:18.416 8.000 - 8.053: 99.8491% ( 1) 00:16:18.416 8.107 - 8.160: 99.8595% ( 2) 
00:16:18.416 8.160 - 8.213: 99.8647% ( 1) 00:16:18.416 8.213 - 8.267: 99.8699% ( 1) 00:16:18.416 8.907 - 8.960: 99.8751% ( 1) 00:16:18.416 12.533 - 12.587: 99.8803% ( 1) 00:16:18.416 14.080 - 14.187: 99.8855% ( 1) 00:16:18.416 15.040 - 15.147: 99.8907% ( 1) 00:16:18.416 15.680 - 15.787: 99.8959% ( 1) 00:16:18.416 3986.773 - 4014.080: 100.0000% ( 20) 00:16:18.416 00:16:18.416 Complete histogram 00:16:18.416 ================== 00:16:18.416 Range in us Cumulative Count 00:16:18.416 2.360 - 2.373: 0.0052% ( 1) 00:16:18.416 2.373 - 2.387: 0.1509% ( 28) 00:16:18.416 2.387 - [2024-07-24 19:56:06.063875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.416 2.400: 0.9731% ( 158) 00:16:18.416 2.400 - 2.413: 1.0668% ( 18) 00:16:18.416 2.413 - 2.427: 1.2229% ( 30) 00:16:18.416 2.427 - 2.440: 1.2750% ( 10) 00:16:18.416 2.440 - 2.453: 47.1846% ( 8822) 00:16:18.416 2.453 - 2.467: 54.9074% ( 1484) 00:16:18.416 2.467 - 2.480: 74.1153% ( 3691) 00:16:18.416 2.480 - 2.493: 79.7096% ( 1075) 00:16:18.416 2.493 - 2.507: 81.5518% ( 354) 00:16:18.416 2.507 - 2.520: 85.3143% ( 723) 00:16:18.416 2.520 - 2.533: 90.9190% ( 1077) 00:16:18.416 2.533 - 2.547: 95.2123% ( 825) 00:16:18.416 2.547 - 2.560: 97.5489% ( 449) 00:16:18.416 2.560 - 2.573: 98.9072% ( 261) 00:16:18.416 2.573 - 2.587: 99.2766% ( 71) 00:16:18.416 2.587 - 2.600: 99.3495% ( 14) 00:16:18.416 2.600 - 2.613: 99.3859% ( 7) 00:16:18.416 2.613 - 2.627: 99.3963% ( 2) 00:16:18.416 2.627 - 2.640: 99.4015% ( 1) 00:16:18.416 2.640 - 2.653: 99.4172% ( 3) 00:16:18.416 2.920 - 2.933: 99.4224% ( 1) 00:16:18.416 2.947 - 2.960: 99.4276% ( 1) 00:16:18.416 2.960 - 2.973: 99.4328% ( 1) 00:16:18.416 4.293 - 4.320: 99.4380% ( 1) 00:16:18.416 4.347 - 4.373: 99.4432% ( 1) 00:16:18.416 4.400 - 4.427: 99.4536% ( 2) 00:16:18.416 4.427 - 4.453: 99.4588% ( 1) 00:16:18.416 4.453 - 4.480: 99.4692% ( 2) 00:16:18.416 4.533 - 4.560: 99.4744% ( 1) 00:16:18.416 4.560 - 4.587: 99.4796% ( 1) 00:16:18.416 
4.587 - 4.613: 99.4848% ( 1) 00:16:18.416 4.613 - 4.640: 99.4900% ( 1) 00:16:18.416 4.640 - 4.667: 99.5004% ( 2) 00:16:18.416 4.667 - 4.693: 99.5056% ( 1) 00:16:18.416 4.747 - 4.773: 99.5160% ( 2) 00:16:18.416 4.773 - 4.800: 99.5264% ( 2) 00:16:18.416 4.800 - 4.827: 99.5316% ( 1) 00:16:18.416 4.827 - 4.853: 99.5368% ( 1) 00:16:18.416 4.987 - 5.013: 99.5420% ( 1) 00:16:18.416 5.093 - 5.120: 99.5473% ( 1) 00:16:18.416 5.147 - 5.173: 99.5525% ( 1) 00:16:18.416 5.200 - 5.227: 99.5577% ( 1) 00:16:18.416 5.307 - 5.333: 99.5629% ( 1) 00:16:18.416 5.387 - 5.413: 99.5681% ( 1) 00:16:18.416 5.600 - 5.627: 99.5733% ( 1) 00:16:18.416 5.653 - 5.680: 99.5785% ( 1) 00:16:18.416 5.840 - 5.867: 99.5837% ( 1) 00:16:18.416 5.947 - 5.973: 99.5889% ( 1) 00:16:18.416 6.027 - 6.053: 99.5941% ( 1) 00:16:18.416 6.187 - 6.213: 99.5993% ( 1) 00:16:18.416 6.693 - 6.720: 99.6045% ( 1) 00:16:18.416 44.587 - 44.800: 99.6097% ( 1) 00:16:18.416 2853.547 - 2867.200: 99.6149% ( 1) 00:16:18.416 3986.773 - 4014.080: 99.9948% ( 73) 00:16:18.416 4969.813 - 4997.120: 100.0000% ( 1) 00:16:18.416 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:18.416 [ 00:16:18.416 { 00:16:18.416 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:18.416 "subtype": "Discovery", 00:16:18.416 
"listen_addresses": [], 00:16:18.416 "allow_any_host": true, 00:16:18.416 "hosts": [] 00:16:18.416 }, 00:16:18.416 { 00:16:18.416 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:18.416 "subtype": "NVMe", 00:16:18.416 "listen_addresses": [ 00:16:18.416 { 00:16:18.416 "trtype": "VFIOUSER", 00:16:18.416 "adrfam": "IPv4", 00:16:18.416 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:18.416 "trsvcid": "0" 00:16:18.416 } 00:16:18.416 ], 00:16:18.416 "allow_any_host": true, 00:16:18.416 "hosts": [], 00:16:18.416 "serial_number": "SPDK1", 00:16:18.416 "model_number": "SPDK bdev Controller", 00:16:18.416 "max_namespaces": 32, 00:16:18.416 "min_cntlid": 1, 00:16:18.416 "max_cntlid": 65519, 00:16:18.416 "namespaces": [ 00:16:18.416 { 00:16:18.416 "nsid": 1, 00:16:18.416 "bdev_name": "Malloc1", 00:16:18.416 "name": "Malloc1", 00:16:18.416 "nguid": "90802E6CBAED4372A086D99190B36A80", 00:16:18.416 "uuid": "90802e6c-baed-4372-a086-d99190b36a80" 00:16:18.416 }, 00:16:18.416 { 00:16:18.416 "nsid": 2, 00:16:18.416 "bdev_name": "Malloc3", 00:16:18.416 "name": "Malloc3", 00:16:18.416 "nguid": "51F8C4B03390417ABF0CDC7D19342288", 00:16:18.416 "uuid": "51f8c4b0-3390-417a-bf0c-dc7d19342288" 00:16:18.416 } 00:16:18.416 ] 00:16:18.416 }, 00:16:18.416 { 00:16:18.416 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:18.416 "subtype": "NVMe", 00:16:18.416 "listen_addresses": [ 00:16:18.416 { 00:16:18.416 "trtype": "VFIOUSER", 00:16:18.416 "adrfam": "IPv4", 00:16:18.416 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:18.416 "trsvcid": "0" 00:16:18.416 } 00:16:18.416 ], 00:16:18.416 "allow_any_host": true, 00:16:18.416 "hosts": [], 00:16:18.416 "serial_number": "SPDK2", 00:16:18.416 "model_number": "SPDK bdev Controller", 00:16:18.416 "max_namespaces": 32, 00:16:18.416 "min_cntlid": 1, 00:16:18.416 "max_cntlid": 65519, 00:16:18.416 "namespaces": [ 00:16:18.416 { 00:16:18.416 "nsid": 1, 00:16:18.416 "bdev_name": "Malloc2", 00:16:18.416 "name": "Malloc2", 00:16:18.416 "nguid": 
"3CBEFFFECFBB4F23BE00C7AD4A467777", 00:16:18.416 "uuid": "3cbefffe-cfbb-4f23-be00-c7ad4a467777" 00:16:18.416 } 00:16:18.416 ] 00:16:18.416 } 00:16:18.416 ] 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3652598 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:18.416 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:18.416 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.678 Malloc4 00:16:18.678 [2024-07-24 19:56:06.453603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.678 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:18.678 [2024-07-24 19:56:06.615643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.939 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:18.939 Asynchronous Event Request test 00:16:18.939 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.939 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.939 Registering asynchronous event callbacks... 00:16:18.939 Starting namespace attribute notice tests for all controllers... 00:16:18.939 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:18.939 aer_cb - Changed Namespace 00:16:18.939 Cleaning up... 
00:16:18.939 [ 00:16:18.939 { 00:16:18.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:18.939 "subtype": "Discovery", 00:16:18.939 "listen_addresses": [], 00:16:18.939 "allow_any_host": true, 00:16:18.939 "hosts": [] 00:16:18.939 }, 00:16:18.939 { 00:16:18.939 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:18.939 "subtype": "NVMe", 00:16:18.939 "listen_addresses": [ 00:16:18.939 { 00:16:18.939 "trtype": "VFIOUSER", 00:16:18.939 "adrfam": "IPv4", 00:16:18.939 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:18.939 "trsvcid": "0" 00:16:18.939 } 00:16:18.939 ], 00:16:18.939 "allow_any_host": true, 00:16:18.939 "hosts": [], 00:16:18.939 "serial_number": "SPDK1", 00:16:18.939 "model_number": "SPDK bdev Controller", 00:16:18.939 "max_namespaces": 32, 00:16:18.939 "min_cntlid": 1, 00:16:18.939 "max_cntlid": 65519, 00:16:18.939 "namespaces": [ 00:16:18.939 { 00:16:18.939 "nsid": 1, 00:16:18.939 "bdev_name": "Malloc1", 00:16:18.939 "name": "Malloc1", 00:16:18.939 "nguid": "90802E6CBAED4372A086D99190B36A80", 00:16:18.939 "uuid": "90802e6c-baed-4372-a086-d99190b36a80" 00:16:18.939 }, 00:16:18.939 { 00:16:18.939 "nsid": 2, 00:16:18.939 "bdev_name": "Malloc3", 00:16:18.939 "name": "Malloc3", 00:16:18.939 "nguid": "51F8C4B03390417ABF0CDC7D19342288", 00:16:18.939 "uuid": "51f8c4b0-3390-417a-bf0c-dc7d19342288" 00:16:18.939 } 00:16:18.939 ] 00:16:18.939 }, 00:16:18.939 { 00:16:18.939 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:18.939 "subtype": "NVMe", 00:16:18.939 "listen_addresses": [ 00:16:18.939 { 00:16:18.939 "trtype": "VFIOUSER", 00:16:18.939 "adrfam": "IPv4", 00:16:18.939 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:18.939 "trsvcid": "0" 00:16:18.939 } 00:16:18.939 ], 00:16:18.939 "allow_any_host": true, 00:16:18.939 "hosts": [], 00:16:18.939 "serial_number": "SPDK2", 00:16:18.939 "model_number": "SPDK bdev Controller", 00:16:18.939 "max_namespaces": 32, 00:16:18.939 "min_cntlid": 1, 00:16:18.939 "max_cntlid": 65519, 00:16:18.939 "namespaces": [ 
00:16:18.939 { 00:16:18.939 "nsid": 1, 00:16:18.939 "bdev_name": "Malloc2", 00:16:18.939 "name": "Malloc2", 00:16:18.939 "nguid": "3CBEFFFECFBB4F23BE00C7AD4A467777", 00:16:18.939 "uuid": "3cbefffe-cfbb-4f23-be00-c7ad4a467777" 00:16:18.939 }, 00:16:18.939 { 00:16:18.939 "nsid": 2, 00:16:18.939 "bdev_name": "Malloc4", 00:16:18.940 "name": "Malloc4", 00:16:18.940 "nguid": "F14651F2F2E94D19878324C1F403CA38", 00:16:18.940 "uuid": "f14651f2-f2e9-4d19-8783-24c1f403ca38" 00:16:18.940 } 00:16:18.940 ] 00:16:18.940 } 00:16:18.940 ] 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3652598 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3643547 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3643547 ']' 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3643547 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3643547 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3643547' 00:16:18.940 killing process with pid 3643547 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 3643547 00:16:18.940 19:56:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3643547 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3652660 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3652660' 00:16:19.201 Process pid: 3652660 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3652660 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3652660 ']' 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.201 
19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.201 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:19.201 [2024-07-24 19:56:07.098440] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:19.201 [2024-07-24 19:56:07.099367] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:16:19.201 [2024-07-24 19:56:07.099410] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.201 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.461 [2024-07-24 19:56:07.159878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.461 [2024-07-24 19:56:07.226427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.461 [2024-07-24 19:56:07.226467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.461 [2024-07-24 19:56:07.226475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.462 [2024-07-24 19:56:07.226481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.462 [2024-07-24 19:56:07.226487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:19.462 [2024-07-24 19:56:07.226630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.462 [2024-07-24 19:56:07.226754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.462 [2024-07-24 19:56:07.226938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.462 [2024-07-24 19:56:07.226939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.462 [2024-07-24 19:56:07.290639] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:19.462 [2024-07-24 19:56:07.290651] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:19.462 [2024-07-24 19:56:07.291728] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:19.462 [2024-07-24 19:56:07.292224] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:19.462 [2024-07-24 19:56:07.292320] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:16:20.033 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.033 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:20.033 19:56:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:20.976 19:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:21.238 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:21.238 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:21.238 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:21.238 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:21.238 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:21.498 Malloc1 00:16:21.498 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:21.498 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:21.760 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:22.020 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:22.020 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:22.020 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:22.020 Malloc2 00:16:22.020 19:56:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:22.281 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3652660 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3652660 ']' 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3652660 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.543 19:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3652660 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3652660' 00:16:22.543 killing process with pid 3652660 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3652660 00:16:22.543 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3652660 00:16:22.804 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:22.804 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:22.804 00:16:22.804 real 0m50.464s 00:16:22.804 user 3m20.093s 00:16:22.804 sys 0m2.934s 00:16:22.804 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.804 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:22.804 ************************************ 00:16:22.804 END TEST nvmf_vfio_user 00:16:22.804 ************************************ 00:16:22.805 19:56:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:22.805 19:56:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:22.805 19:56:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.805 19:56:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.805 ************************************ 00:16:22.805 START TEST nvmf_vfio_user_nvme_compliance 00:16:22.805 ************************************ 00:16:22.805 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:23.079 * Looking for test storage... 00:16:23.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.079 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.080 19:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.080 19:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3653411 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3653411' 00:16:23.080 Process pid: 3653411 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3653411 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:23.080 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3653411 ']' 00:16:23.081 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.081 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.081 19:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.081 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.081 19:56:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.081 [2024-07-24 19:56:10.888660] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:16:23.081 [2024-07-24 19:56:10.888751] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.081 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.081 [2024-07-24 19:56:10.955976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:23.344 [2024-07-24 19:56:11.032175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.344 [2024-07-24 19:56:11.032225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.344 [2024-07-24 19:56:11.032232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.344 [2024-07-24 19:56:11.032239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.344 [2024-07-24 19:56:11.032244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:23.344 [2024-07-24 19:56:11.032324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.344 [2024-07-24 19:56:11.032459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.344 [2024-07-24 19:56:11.032461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.915 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.915 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:23.915 19:56:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:24.855 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.855 19:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:24.855 malloc0 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:24.856 19:56:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:24.856 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.116 00:16:25.116 00:16:25.116 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.116 http://cunit.sourceforge.net/ 00:16:25.116 00:16:25.116 00:16:25.116 Suite: nvme_compliance 00:16:25.116 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 19:56:12.921156] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.116 [2024-07-24 19:56:12.922500] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:25.116 [2024-07-24 19:56:12.922511] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:25.116 [2024-07-24 19:56:12.922515] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:25.116 [2024-07-24 19:56:12.924177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.116 passed 00:16:25.116 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 19:56:13.020774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.116 [2024-07-24 19:56:13.023790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.116 passed 00:16:25.377 Test: admin_identify_ns ...[2024-07-24 19:56:13.120484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.377 [2024-07-24 19:56:13.180215] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:25.377 [2024-07-24 19:56:13.188222] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:25.377 [2024-07-24 
19:56:13.209328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.377 passed 00:16:25.377 Test: admin_get_features_mandatory_features ...[2024-07-24 19:56:13.301966] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.377 [2024-07-24 19:56:13.304984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.637 passed 00:16:25.637 Test: admin_get_features_optional_features ...[2024-07-24 19:56:13.398562] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.637 [2024-07-24 19:56:13.401579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.637 passed 00:16:25.637 Test: admin_set_features_number_of_queues ...[2024-07-24 19:56:13.495683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.898 [2024-07-24 19:56:13.600311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.898 passed 00:16:25.898 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 19:56:13.692323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.898 [2024-07-24 19:56:13.695343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.898 passed 00:16:25.898 Test: admin_get_log_page_with_lpo ...[2024-07-24 19:56:13.790476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.158 [2024-07-24 19:56:13.858211] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:26.158 [2024-07-24 19:56:13.871270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.158 passed 00:16:26.158 Test: fabric_property_get ...[2024-07-24 19:56:13.962902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.158 [2024-07-24 19:56:13.964157] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:26.158 [2024-07-24 19:56:13.965921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.158 passed 00:16:26.158 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 19:56:14.058630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.158 [2024-07-24 19:56:14.059893] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:26.158 [2024-07-24 19:56:14.061661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.158 passed 00:16:26.418 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 19:56:14.156473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.418 [2024-07-24 19:56:14.240207] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:26.418 [2024-07-24 19:56:14.256208] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:26.418 [2024-07-24 19:56:14.261284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.418 passed 00:16:26.418 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 19:56:14.354906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.418 [2024-07-24 19:56:14.356148] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:26.418 [2024-07-24 19:56:14.357927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.679 passed 00:16:26.679 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 19:56:14.450074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.679 [2024-07-24 19:56:14.526212] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:16:26.679 [2024-07-24 19:56:14.550214] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:26.679 [2024-07-24 19:56:14.555295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.679 passed 00:16:26.940 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 19:56:14.647936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.940 [2024-07-24 19:56:14.649182] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:26.940 [2024-07-24 19:56:14.649209] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:26.940 [2024-07-24 19:56:14.650951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.940 passed 00:16:26.940 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 19:56:14.742488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.940 [2024-07-24 19:56:14.834207] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:26.940 [2024-07-24 19:56:14.842217] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:26.940 [2024-07-24 19:56:14.850212] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:26.940 [2024-07-24 19:56:14.858208] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:26.940 [2024-07-24 19:56:14.887291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.200 passed 00:16:27.200 Test: admin_create_io_sq_verify_pc ...[2024-07-24 19:56:14.981310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.200 [2024-07-24 19:56:15.000213] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:27.200 
[2024-07-24 19:56:15.017480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.200 passed 00:16:27.200 Test: admin_create_io_qp_max_qps ...[2024-07-24 19:56:15.106013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.585 [2024-07-24 19:56:16.216212] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:28.846 [2024-07-24 19:56:16.603665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.846 passed 00:16:28.846 Test: admin_create_io_sq_shared_cq ...[2024-07-24 19:56:16.696949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.107 [2024-07-24 19:56:16.829210] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:29.107 [2024-07-24 19:56:16.866273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.107 passed 00:16:29.107 00:16:29.107 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.107 suites 1 1 n/a 0 0 00:16:29.107 tests 18 18 18 0 0 00:16:29.107 asserts 360 360 360 0 n/a 00:16:29.107 00:16:29.107 Elapsed time = 1.654 seconds 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3653411 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3653411 ']' 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3653411 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.107 19:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3653411 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3653411' 00:16:29.107 killing process with pid 3653411 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3653411 00:16:29.107 19:56:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3653411 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:29.397 00:16:29.397 real 0m6.432s 00:16:29.397 user 0m18.359s 00:16:29.397 sys 0m0.467s 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:29.397 ************************************ 00:16:29.397 END TEST nvmf_vfio_user_nvme_compliance 00:16:29.397 ************************************ 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:29.397 ************************************ 00:16:29.397 START TEST nvmf_vfio_user_fuzz 00:16:29.397 ************************************ 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:29.397 * Looking for test storage... 00:16:29.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.397 19:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.397 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:29.398 19:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3654802 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3654802' 00:16:29.398 Process pid: 3654802 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3654802 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3654802 ']' 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:29.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.398 19:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:30.341 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.341 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:30.341 19:56:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.283 malloc0 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:31.283 19:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:03.404 Fuzzing completed. Shutting down the fuzz application 00:17:03.404 00:17:03.405 Dumping successful admin opcodes: 00:17:03.405 8, 9, 10, 24, 00:17:03.405 Dumping successful io opcodes: 00:17:03.405 0, 00:17:03.405 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1114069, total successful commands: 4384, random_seed: 934565248 00:17:03.405 NS: 0x200003a1ef00 admin qp, Total commands completed: 140278, total successful commands: 1137, random_seed: 4094922688 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3654802 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3654802 ']' 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3654802 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3654802 00:17:03.405 19:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3654802' 00:17:03.405 killing process with pid 3654802 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3654802 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3654802 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:03.405 00:17:03.405 real 0m33.624s 00:17:03.405 user 0m37.768s 00:17:03.405 sys 0m24.915s 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:03.405 ************************************ 00:17:03.405 END TEST nvmf_vfio_user_fuzz 00:17:03.405 ************************************ 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.405 ************************************ 00:17:03.405 START TEST nvmf_auth_target 00:17:03.405 ************************************ 00:17:03.405 19:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:03.405 * Looking for test storage... 00:17:03.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.405 19:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.405 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.406 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:17:03.406 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.406 19:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:09.998 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.999 19:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:09.999 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:09.999 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.999 19:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:09.999 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:09.999 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.999 19:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.999 19:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.261 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.261 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.261 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.261 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.261 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.261 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:17:10.523 00:17:10.523 --- 10.0.0.2 ping statistics --- 00:17:10.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.523 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:17:10.523 00:17:10.523 --- 10.0.0.1 ping statistics --- 00:17:10.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.523 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3665100 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3665100 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3665100 ']' 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.523 19:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3665133 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:11.466 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c913feb9980585c8512ef6310da9e77c9c623d878556e13d 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NGB 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c913feb9980585c8512ef6310da9e77c9c623d878556e13d 0 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c913feb9980585c8512ef6310da9e77c9c623d878556e13d 0 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c913feb9980585c8512ef6310da9e77c9c623d878556e13d 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NGB 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo 
/tmp/spdk.key-null.NGB 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.NGB 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4f9864b8090974f586240627203645114c7170723f9ed3aaedfdafa481d177a2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7vG 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4f9864b8090974f586240627203645114c7170723f9ed3aaedfdafa481d177a2 3 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4f9864b8090974f586240627203645114c7170723f9ed3aaedfdafa481d177a2 3 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.467 19:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4f9864b8090974f586240627203645114c7170723f9ed3aaedfdafa481d177a2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7vG 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7vG 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.7vG 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bda23769ab8c8d12875348c6f30372a6 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dHW 00:17:11.467 19:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bda23769ab8c8d12875348c6f30372a6 1 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bda23769ab8c8d12875348c6f30372a6 1 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bda23769ab8c8d12875348c6f30372a6 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dHW 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dHW 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.dHW 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f7cfdd41772574246d40bc786c6e3866531517645707cec4 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CsD 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f7cfdd41772574246d40bc786c6e3866531517645707cec4 2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f7cfdd41772574246d40bc786c6e3866531517645707cec4 2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f7cfdd41772574246d40bc786c6e3866531517645707cec4 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CsD 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CsD 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.CsD 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.467 19:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=58138fd489ddf0a74ad6983d353207fb0128213be850e4b7 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Nyp 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 58138fd489ddf0a74ad6983d353207fb0128213be850e4b7 2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 58138fd489ddf0a74ad6983d353207fb0128213be850e4b7 2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=58138fd489ddf0a74ad6983d353207fb0128213be850e4b7 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:11.467 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Nyp 
00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Nyp 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Nyp 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ddb535b02f1a9d8d89db0f5015db68b5 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sUl 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ddb535b02f1a9d8d89db0f5015db68b5 1 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ddb535b02f1a9d8d89db0f5015db68b5 1 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.729 19:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ddb535b02f1a9d8d89db0f5015db68b5 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sUl 00:17:11.729 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sUl 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.sUl 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c4698cef19537c1cae5e72c8ebd2efb1b8f183db8299c13fa47ddaf6d98011d0 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hxR 00:17:11.730 19:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c4698cef19537c1cae5e72c8ebd2efb1b8f183db8299c13fa47ddaf6d98011d0 3 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c4698cef19537c1cae5e72c8ebd2efb1b8f183db8299c13fa47ddaf6d98011d0 3 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c4698cef19537c1cae5e72c8ebd2efb1b8f183db8299c13fa47ddaf6d98011d0 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hxR 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hxR 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.hxR 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3665100 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3665100 ']' 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.730 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3665133 /var/tmp/host.sock 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3665133 ']' 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:11.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
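The `gen_dhchap_key` / `format_dhchap_key` trace above draws `len/2` random bytes with `xxd -p -c0 -l N /dev/urandom`, keeps the resulting hex string as the ASCII secret, and pipes it through an inline `python -` step to produce the `DHHC-1:...` secret representation. Below is a minimal, hypothetical re-implementation of that flow — it is a sketch, not SPDK's actual `nvmf/common.sh` code. The digest index map (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3) is taken directly from the log; the assumption that the secret is the ASCII hex text with a little-endian CRC-32 appended before base64 encoding follows the NVMe-oF DH-HMAC-CHAP secret representation, and the two-digit digest field mirrors nvme-cli's `DHHC-1:00:` style.

```python
import base64
import os
import zlib

# Digest indices as traced in the log (nvmf/common.sh@724):
# null=0, sha256=1, sha384=2, sha512=3
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_dhchap_key(digest: str, length: int) -> str:
    """Sketch of gen_dhchap_key: draw length/2 random bytes, keep the
    ASCII hex expansion as the secret (so `length` is the secret size in
    bytes), and wrap it in the DHHC-1 representation. Assumption: CRC-32
    is appended little-endian before base64, per the NVMe-oF secret format."""
    secret = os.urandom(length // 2).hex().encode()  # `length` ASCII bytes
    blob = secret + zlib.crc32(secret).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(DIGESTS[digest],
                                      base64.b64encode(blob).decode())

# Mirrors `gen_dhchap_key null 48` and `gen_dhchap_key sha512 64` above.
print(gen_dhchap_key("null", 48))
print(gen_dhchap_key("sha512", 64))
```

In the traced run each secret is then written to a `mktemp`-created file such as `/tmp/spdk.key-null.NGB`, `chmod 0600`, and registered on both sides via `keyring_file_add_key`.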
00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NGB 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.NGB 00:17:11.992 19:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NGB 00:17:12.253 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.7vG ]] 00:17:12.253 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7vG 00:17:12.253 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.253 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.253 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.253 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7vG 00:17:12.253 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7vG 00:17:12.513 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:12.513 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dHW 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dHW 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dHW 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.CsD ]] 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CsD 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CsD 00:17:12.514 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CsD 00:17:12.775 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:12.775 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Nyp 00:17:12.775 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.775 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.775 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.775 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Nyp 00:17:12.775 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Nyp 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.sUl ]] 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sUl 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sUl 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sUl 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hxR 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.hxR 00:17:13.035 19:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.hxR 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
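After `bdev_nvme_attach_controller` succeeds, the test pulls `nvmf_subsystem_get_qpairs` and runs three `jq` checks against the qpair's `auth` object (digest, dhgroup, state). The following sketch performs the equivalent checks in Python; the field names and values are taken from the qpairs JSON captured later in this log, trimmed to just the fields the test inspects.

```python
import json

# Trimmed qpair record, shaped like the nvmf_subsystem_get_qpairs output
# captured in this log run (sha256 digest, null dhgroup, key0/ckey0).
qpairs = json.loads("""
[{"cntlid": 1, "qid": 0, "state": "enabled",
  "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]
""")

auth = qpairs[0]["auth"]
# Equivalent of: jq -r '.[0].auth.digest' / '.auth.dhgroup' / '.auth.state'
assert auth["digest"] == "sha256"      # negotiated hash matches --dhchap-digests
assert auth["dhgroup"] == "null"       # negotiated group matches --dhchap-dhgroups
assert auth["state"] == "completed"    # DH-HMAC-CHAP handshake finished
print("auth verified:", auth)
```

The shell test iterates this same check over every (digest, dhgroup, keyid) combination via the `for digest` / `for dhgroup` / `for keyid` loops visible in the trace.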
00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.298 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.560 00:17:13.561 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.561 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.561 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:13.821 { 00:17:13.821 "cntlid": 1, 00:17:13.821 "qid": 0, 00:17:13.821 "state": "enabled", 00:17:13.821 "thread": "nvmf_tgt_poll_group_000", 00:17:13.821 "listen_address": { 00:17:13.821 "trtype": "TCP", 00:17:13.821 "adrfam": "IPv4", 00:17:13.821 "traddr": "10.0.0.2", 00:17:13.821 "trsvcid": "4420" 00:17:13.821 }, 00:17:13.821 "peer_address": { 00:17:13.821 "trtype": "TCP", 00:17:13.821 "adrfam": "IPv4", 00:17:13.821 "traddr": "10.0.0.1", 00:17:13.821 "trsvcid": "46644" 00:17:13.821 }, 00:17:13.821 "auth": { 00:17:13.821 "state": "completed", 00:17:13.821 "digest": "sha256", 00:17:13.821 "dhgroup": "null" 00:17:13.821 } 00:17:13.821 } 00:17:13.821 ]' 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.821 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.082 19:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.024 19:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.024 19:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.285 00:17:15.285 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.285 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
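The qpairs check in this run pulls `nvmf_subsystem_get_qpairs` output and filters it with `jq -r '.[0].auth.digest'`, `.[0].auth.dhgroup`, and `.[0].auth.state`, then compares against the expected values. The same verification can be sketched in Python; the JSON below is the payload logged above for cntlid 1 with the interleaved timestamps stripped (field names as emitted by the SPDK RPC).

```python
import json

# qpairs payload as logged by nvmf_subsystem_get_qpairs (timestamps removed)
qpairs_json = """
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "46644"},
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}
  }
]
"""

qpairs = json.loads(qpairs_json)
auth = qpairs[0]["auth"]

# Equivalent of the script's jq filters plus its [[ ... == ... ]] comparisons
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
```

The "completed" auth state is what the test treats as a successful DH-HMAC-CHAP handshake for the digest/dhgroup pair under test.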
00:17:15.285 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.546 { 00:17:15.546 "cntlid": 3, 00:17:15.546 "qid": 0, 00:17:15.546 "state": "enabled", 00:17:15.546 "thread": "nvmf_tgt_poll_group_000", 00:17:15.546 "listen_address": { 00:17:15.546 "trtype": "TCP", 00:17:15.546 "adrfam": "IPv4", 00:17:15.546 "traddr": "10.0.0.2", 00:17:15.546 "trsvcid": "4420" 00:17:15.546 }, 00:17:15.546 "peer_address": { 00:17:15.546 "trtype": "TCP", 00:17:15.546 "adrfam": "IPv4", 00:17:15.546 "traddr": "10.0.0.1", 00:17:15.546 "trsvcid": "46666" 00:17:15.546 }, 00:17:15.546 "auth": { 00:17:15.546 "state": "completed", 00:17:15.546 "digest": "sha256", 00:17:15.546 "dhgroup": "null" 00:17:15.546 } 00:17:15.546 } 00:17:15.546 ]' 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:15.546 19:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.546 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.807 19:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.379 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.640 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.640 
19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.901 00:17:16.901 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.901 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.901 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.161 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.161 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.161 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.161 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.161 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.161 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.161 { 00:17:17.161 "cntlid": 5, 00:17:17.161 "qid": 0, 00:17:17.161 "state": "enabled", 00:17:17.161 "thread": "nvmf_tgt_poll_group_000", 00:17:17.161 "listen_address": { 00:17:17.161 "trtype": "TCP", 00:17:17.161 "adrfam": "IPv4", 00:17:17.161 "traddr": "10.0.0.2", 00:17:17.161 "trsvcid": "4420" 00:17:17.161 }, 00:17:17.161 "peer_address": { 00:17:17.161 "trtype": "TCP", 00:17:17.161 "adrfam": "IPv4", 00:17:17.161 "traddr": 
"10.0.0.1", 00:17:17.161 "trsvcid": "39386" 00:17:17.161 }, 00:17:17.161 "auth": { 00:17:17.162 "state": "completed", 00:17:17.162 "digest": "sha256", 00:17:17.162 "dhgroup": "null" 00:17:17.162 } 00:17:17.162 } 00:17:17.162 ]' 00:17:17.162 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.162 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.162 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.162 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:17.162 19:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.162 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.422 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:17:18.053 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.053 19:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.053 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.053 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.053 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.053 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.053 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.053 19:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.314 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.575 00:17:18.575 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.575 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.575 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.836 19:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.836 { 00:17:18.836 "cntlid": 7, 00:17:18.836 "qid": 0, 00:17:18.836 "state": "enabled", 00:17:18.836 "thread": "nvmf_tgt_poll_group_000", 00:17:18.836 "listen_address": { 00:17:18.836 "trtype": "TCP", 00:17:18.836 "adrfam": "IPv4", 00:17:18.836 "traddr": "10.0.0.2", 00:17:18.836 "trsvcid": "4420" 00:17:18.836 }, 00:17:18.836 "peer_address": { 00:17:18.836 "trtype": "TCP", 00:17:18.836 "adrfam": "IPv4", 00:17:18.836 "traddr": "10.0.0.1", 00:17:18.836 "trsvcid": "39424" 00:17:18.836 }, 00:17:18.836 "auth": { 00:17:18.836 "state": "completed", 00:17:18.836 "digest": "sha256", 00:17:18.836 "dhgroup": "null" 00:17:18.836 } 00:17:18.836 } 00:17:18.836 ]' 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.836 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.837 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.837 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.837 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.837 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.098 19:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.668 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.928 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:17:19.928 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.929 19:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.189 00:17:20.189 19:57:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.189 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.189 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.450 { 00:17:20.450 "cntlid": 9, 00:17:20.450 "qid": 0, 00:17:20.450 "state": "enabled", 00:17:20.450 "thread": "nvmf_tgt_poll_group_000", 00:17:20.450 "listen_address": { 00:17:20.450 "trtype": "TCP", 00:17:20.450 "adrfam": "IPv4", 00:17:20.450 "traddr": "10.0.0.2", 00:17:20.450 "trsvcid": "4420" 00:17:20.450 }, 00:17:20.450 "peer_address": { 00:17:20.450 "trtype": "TCP", 00:17:20.450 "adrfam": "IPv4", 00:17:20.450 "traddr": "10.0.0.1", 00:17:20.450 "trsvcid": "39450" 00:17:20.450 }, 00:17:20.450 "auth": { 00:17:20.450 "state": "completed", 00:17:20.450 "digest": "sha256", 00:17:20.450 "dhgroup": "ffdhe2048" 00:17:20.450 } 00:17:20.450 } 00:17:20.450 ]' 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.450 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.711 19:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.653 19:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.653 19:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.653 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.914 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.914 { 
00:17:21.914 "cntlid": 11,
00:17:21.914 "qid": 0,
00:17:21.914 "state": "enabled",
00:17:21.914 "thread": "nvmf_tgt_poll_group_000",
00:17:21.914 "listen_address": {
00:17:21.914 "trtype": "TCP",
00:17:21.914 "adrfam": "IPv4",
00:17:21.914 "traddr": "10.0.0.2",
00:17:21.914 "trsvcid": "4420"
00:17:21.914 },
00:17:21.914 "peer_address": {
00:17:21.914 "trtype": "TCP",
00:17:21.914 "adrfam": "IPv4",
00:17:21.914 "traddr": "10.0.0.1",
00:17:21.914 "trsvcid": "39492"
00:17:21.914 },
00:17:21.914 "auth": {
00:17:21.914 "state": "completed",
00:17:21.914 "digest": "sha256",
00:17:21.914 "dhgroup": "ffdhe2048"
00:17:21.914 }
00:17:21.914 }
00:17:21.914 ]'
00:17:21.914 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:22.175 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:22.175 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:22.175 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:22.175 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:22.175 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:22.175 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:22.175 19:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:22.435 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.009 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.010 19:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.270 00:17:23.270 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.270 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.270 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers
00:17:23.531 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:23.531 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:23.531 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.531 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:23.532 {
00:17:23.532 "cntlid": 13,
00:17:23.532 "qid": 0,
00:17:23.532 "state": "enabled",
00:17:23.532 "thread": "nvmf_tgt_poll_group_000",
00:17:23.532 "listen_address": {
00:17:23.532 "trtype": "TCP",
00:17:23.532 "adrfam": "IPv4",
00:17:23.532 "traddr": "10.0.0.2",
00:17:23.532 "trsvcid": "4420"
00:17:23.532 },
00:17:23.532 "peer_address": {
00:17:23.532 "trtype": "TCP",
00:17:23.532 "adrfam": "IPv4",
00:17:23.532 "traddr": "10.0.0.1",
00:17:23.532 "trsvcid": "39524"
00:17:23.532 },
00:17:23.532 "auth": {
00:17:23.532 "state": "completed",
00:17:23.532 "digest": "sha256",
00:17:23.532 "dhgroup": "ffdhe2048"
00:17:23.532 }
00:17:23.532 }
00:17:23.532 ]'
00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.532 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.792 19:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.735 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:24.997
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:24.997 {
00:17:24.997 "cntlid": 15,
00:17:24.997 "qid": 0,
00:17:24.997 "state": "enabled",
00:17:24.997 "thread": "nvmf_tgt_poll_group_000",
00:17:24.997 "listen_address": {
00:17:24.997 "trtype": "TCP",
00:17:24.997 "adrfam": "IPv4",
00:17:24.997 "traddr": "10.0.0.2",
00:17:24.997 "trsvcid": "4420"
00:17:24.997 },
00:17:24.997 "peer_address": {
00:17:24.997 "trtype": "TCP",
00:17:24.997 "adrfam": "IPv4",
00:17:24.997 "traddr": "10.0.0.1",
00:17:24.997 "trsvcid": "39562"
00:17:24.997 },
00:17:24.997 "auth": {
00:17:24.997 "state": "completed",
00:17:24.997 "digest": "sha256",
00:17:24.997 "dhgroup": "ffdhe2048"
00:17:24.997 }
00:17:24.997 }
00:17:24.997 ]'
00:17:24.997 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:25.258 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:25.258 19:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:25.258 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:25.258 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:25.258 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:25.258 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:25.258 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.525 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=:
00:17:26.099 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:26.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:26.099 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.099 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.099 19:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.099 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.099 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.099 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.099 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.099 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.360 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.621 00:17:26.621 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.621 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.621 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.882 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.882 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.882 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:17:26.882 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.882 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.882 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:26.882 {
00:17:26.882 "cntlid": 17,
00:17:26.882 "qid": 0,
00:17:26.882 "state": "enabled",
00:17:26.882 "thread": "nvmf_tgt_poll_group_000",
00:17:26.883 "listen_address": {
00:17:26.883 "trtype": "TCP",
00:17:26.883 "adrfam": "IPv4",
00:17:26.883 "traddr": "10.0.0.2",
00:17:26.883 "trsvcid": "4420"
00:17:26.883 },
00:17:26.883 "peer_address": {
00:17:26.883 "trtype": "TCP",
00:17:26.883 "adrfam": "IPv4",
00:17:26.883 "traddr": "10.0.0.1",
00:17:26.883 "trsvcid": "55124"
00:17:26.883 },
00:17:26.883 "auth": {
00:17:26.883 "state": "completed",
00:17:26.883 "digest": "sha256",
00:17:26.883 "dhgroup": "ffdhe3072"
00:17:26.883 }
00:17:26.883 }
00:17:26.883 ]'
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.883 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.143 19:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:17:27.715 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.715 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.715 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.715 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.976 19:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.976 19:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1
00:17:28.237
00:17:28.237 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:28.237 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:28.237 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:28.498 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:28.498 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:28.498 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:28.499 {
00:17:28.499 "cntlid": 19,
00:17:28.499 "qid": 0,
00:17:28.499 "state": "enabled",
00:17:28.499 "thread": "nvmf_tgt_poll_group_000",
00:17:28.499 "listen_address": {
00:17:28.499 "trtype": "TCP",
00:17:28.499 "adrfam": "IPv4",
00:17:28.499 "traddr": "10.0.0.2",
00:17:28.499 "trsvcid": "4420"
00:17:28.499 },
00:17:28.499 "peer_address": {
00:17:28.499 "trtype": "TCP",
00:17:28.499 "adrfam": "IPv4",
00:17:28.499 "traddr": "10.0.0.1",
00:17:28.499 "trsvcid": "55152"
00:17:28.499 },
00:17:28.499 "auth": {
00:17:28.499 "state": "completed",
00:17:28.499 "digest": "sha256",
00:17:28.499 "dhgroup": "ffdhe3072"
00:17:28.499 }
00:17:28.499 }
00:17:28.499 ]'
00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:28.499
19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.499 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.759 19:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.702 19:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.702 19:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.702 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.963 00:17:29.963 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.963 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.963 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.223 { 
00:17:30.223 "cntlid": 21, 00:17:30.223 "qid": 0, 00:17:30.223 "state": "enabled", 00:17:30.223 "thread": "nvmf_tgt_poll_group_000", 00:17:30.223 "listen_address": { 00:17:30.223 "trtype": "TCP", 00:17:30.223 "adrfam": "IPv4", 00:17:30.223 "traddr": "10.0.0.2", 00:17:30.223 "trsvcid": "4420" 00:17:30.223 }, 00:17:30.223 "peer_address": { 00:17:30.223 "trtype": "TCP", 00:17:30.223 "adrfam": "IPv4", 00:17:30.223 "traddr": "10.0.0.1", 00:17:30.223 "trsvcid": "55198" 00:17:30.223 }, 00:17:30.223 "auth": { 00:17:30.223 "state": "completed", 00:17:30.223 "digest": "sha256", 00:17:30.223 "dhgroup": "ffdhe3072" 00:17:30.223 } 00:17:30.223 } 00:17:30.223 ]' 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.223 19:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.223 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.223 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.223 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.223 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.223 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.483 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:17:31.083 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.083 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.083 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.083 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.084 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.084 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.084 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.084 19:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe3072 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.344 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.605 00:17:31.605 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.605 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.605 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.866 19:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.866 { 00:17:31.866 "cntlid": 23, 00:17:31.866 "qid": 0, 00:17:31.866 "state": "enabled", 00:17:31.866 "thread": "nvmf_tgt_poll_group_000", 00:17:31.866 "listen_address": { 00:17:31.866 "trtype": "TCP", 00:17:31.866 "adrfam": "IPv4", 00:17:31.866 "traddr": "10.0.0.2", 00:17:31.866 "trsvcid": "4420" 00:17:31.866 }, 00:17:31.866 "peer_address": { 00:17:31.866 "trtype": "TCP", 00:17:31.866 "adrfam": "IPv4", 00:17:31.866 "traddr": "10.0.0.1", 00:17:31.866 "trsvcid": "55242" 00:17:31.866 }, 00:17:31.866 "auth": { 00:17:31.866 "state": "completed", 00:17:31.866 "digest": "sha256", 00:17:31.866 "dhgroup": "ffdhe3072" 00:17:31.866 } 00:17:31.866 } 00:17:31.866 ]' 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.866 19:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.866 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.127 19:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.699 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.960 19:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.960 19:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.221 00:17:33.221 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.221 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.221 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.482 { 00:17:33.482 "cntlid": 25, 00:17:33.482 "qid": 0, 00:17:33.482 "state": "enabled", 00:17:33.482 "thread": "nvmf_tgt_poll_group_000", 00:17:33.482 "listen_address": { 00:17:33.482 "trtype": "TCP", 00:17:33.482 "adrfam": "IPv4", 00:17:33.482 "traddr": "10.0.0.2", 00:17:33.482 "trsvcid": "4420" 00:17:33.482 }, 00:17:33.482 "peer_address": { 00:17:33.482 "trtype": "TCP", 00:17:33.482 "adrfam": "IPv4", 00:17:33.482 "traddr": "10.0.0.1", 
00:17:33.482 "trsvcid": "55276" 00:17:33.482 }, 00:17:33.482 "auth": { 00:17:33.482 "state": "completed", 00:17:33.482 "digest": "sha256", 00:17:33.482 "dhgroup": "ffdhe4096" 00:17:33.482 } 00:17:33.482 } 00:17:33.482 ]' 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.482 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.743 19:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.315 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.576 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.836 00:17:34.836 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.836 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.836 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.096 { 00:17:35.096 "cntlid": 27, 00:17:35.096 "qid": 0, 00:17:35.096 "state": "enabled", 00:17:35.096 "thread": "nvmf_tgt_poll_group_000", 00:17:35.096 "listen_address": { 00:17:35.096 "trtype": "TCP", 00:17:35.096 "adrfam": "IPv4", 00:17:35.096 "traddr": "10.0.0.2", 00:17:35.096 "trsvcid": "4420" 00:17:35.096 }, 00:17:35.096 "peer_address": { 00:17:35.096 "trtype": "TCP", 00:17:35.096 "adrfam": "IPv4", 00:17:35.096 "traddr": "10.0.0.1", 00:17:35.096 "trsvcid": "55302" 00:17:35.096 }, 00:17:35.096 "auth": { 00:17:35.096 "state": "completed", 00:17:35.096 "digest": "sha256", 00:17:35.096 "dhgroup": "ffdhe4096" 00:17:35.096 } 00:17:35.096 } 00:17:35.096 ]' 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.096 19:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.096 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.096 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.096 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.357 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.303 19:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.303 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.563 00:17:36.563 19:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.563 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.563 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.823 { 00:17:36.823 "cntlid": 29, 00:17:36.823 "qid": 0, 00:17:36.823 "state": "enabled", 00:17:36.823 "thread": "nvmf_tgt_poll_group_000", 00:17:36.823 "listen_address": { 00:17:36.823 "trtype": "TCP", 00:17:36.823 "adrfam": "IPv4", 00:17:36.823 "traddr": "10.0.0.2", 00:17:36.823 "trsvcid": "4420" 00:17:36.823 }, 00:17:36.823 "peer_address": { 00:17:36.823 "trtype": "TCP", 00:17:36.823 "adrfam": "IPv4", 00:17:36.823 "traddr": "10.0.0.1", 00:17:36.823 "trsvcid": "44918" 00:17:36.823 }, 00:17:36.823 "auth": { 00:17:36.823 "state": "completed", 00:17:36.823 "digest": "sha256", 00:17:36.823 "dhgroup": "ffdhe4096" 00:17:36.823 } 00:17:36.823 } 00:17:36.823 ]' 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.823 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.084 19:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.027 19:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.288 00:17:38.288 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.288 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.288 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.288 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.550 { 00:17:38.550 "cntlid": 31, 00:17:38.550 "qid": 0, 00:17:38.550 "state": "enabled", 00:17:38.550 "thread": "nvmf_tgt_poll_group_000", 
00:17:38.550 "listen_address": { 00:17:38.550 "trtype": "TCP", 00:17:38.550 "adrfam": "IPv4", 00:17:38.550 "traddr": "10.0.0.2", 00:17:38.550 "trsvcid": "4420" 00:17:38.550 }, 00:17:38.550 "peer_address": { 00:17:38.550 "trtype": "TCP", 00:17:38.550 "adrfam": "IPv4", 00:17:38.550 "traddr": "10.0.0.1", 00:17:38.550 "trsvcid": "44934" 00:17:38.550 }, 00:17:38.550 "auth": { 00:17:38.550 "state": "completed", 00:17:38.550 "digest": "sha256", 00:17:38.550 "dhgroup": "ffdhe4096" 00:17:38.550 } 00:17:38.550 } 00:17:38.550 ]' 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.550 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.811 19:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 
00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.383 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.645 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.906 00:17:39.906 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.906 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.906 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.168 19:57:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.168 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.168 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.168 19:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.168 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.168 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.168 { 00:17:40.168 "cntlid": 33, 00:17:40.168 "qid": 0, 00:17:40.168 "state": "enabled", 00:17:40.168 "thread": "nvmf_tgt_poll_group_000", 00:17:40.168 "listen_address": { 00:17:40.168 "trtype": "TCP", 00:17:40.168 "adrfam": "IPv4", 00:17:40.168 "traddr": "10.0.0.2", 00:17:40.168 "trsvcid": "4420" 00:17:40.168 }, 00:17:40.168 "peer_address": { 00:17:40.168 "trtype": "TCP", 00:17:40.168 "adrfam": "IPv4", 00:17:40.168 "traddr": "10.0.0.1", 00:17:40.168 "trsvcid": "44956" 00:17:40.168 }, 00:17:40.168 "auth": { 00:17:40.168 "state": "completed", 00:17:40.168 "digest": "sha256", 00:17:40.168 "dhgroup": "ffdhe6144" 00:17:40.168 } 00:17:40.168 } 00:17:40.168 ]' 00:17:40.168 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.168 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.168 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.168 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.168 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.430 19:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.430 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.430 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.430 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:17:41.002 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.002 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.002 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.002 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.264 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.264 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.264 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:17:41.264 19:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.264 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.525 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.786 { 00:17:41.786 "cntlid": 35, 00:17:41.786 "qid": 0, 00:17:41.786 "state": "enabled", 00:17:41.786 "thread": "nvmf_tgt_poll_group_000", 00:17:41.786 "listen_address": { 00:17:41.786 "trtype": "TCP", 00:17:41.786 "adrfam": "IPv4", 00:17:41.786 "traddr": "10.0.0.2", 00:17:41.786 "trsvcid": "4420" 00:17:41.786 }, 00:17:41.786 "peer_address": { 00:17:41.786 "trtype": "TCP", 00:17:41.786 "adrfam": "IPv4", 00:17:41.786 "traddr": "10.0.0.1", 00:17:41.786 "trsvcid": "44990" 00:17:41.786 
}, 00:17:41.786 "auth": { 00:17:41.786 "state": "completed", 00:17:41.786 "digest": "sha256", 00:17:41.786 "dhgroup": "ffdhe6144" 00:17:41.786 } 00:17:41.786 } 00:17:41.786 ]' 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.786 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.048 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.048 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.048 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.048 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.048 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.048 19:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.990 19:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.252 00:17:43.252 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.252 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.252 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.513 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.513 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.513 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.513 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:43.513 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.513 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.513 { 00:17:43.513 "cntlid": 37, 00:17:43.513 "qid": 0, 00:17:43.513 "state": "enabled", 00:17:43.513 "thread": "nvmf_tgt_poll_group_000", 00:17:43.513 "listen_address": { 00:17:43.513 "trtype": "TCP", 00:17:43.513 "adrfam": "IPv4", 00:17:43.513 "traddr": "10.0.0.2", 00:17:43.513 "trsvcid": "4420" 00:17:43.513 }, 00:17:43.513 "peer_address": { 00:17:43.513 "trtype": "TCP", 00:17:43.514 "adrfam": "IPv4", 00:17:43.514 "traddr": "10.0.0.1", 00:17:43.514 "trsvcid": "45012" 00:17:43.514 }, 00:17:43.514 "auth": { 00:17:43.514 "state": "completed", 00:17:43.514 "digest": "sha256", 00:17:43.514 "dhgroup": "ffdhe6144" 00:17:43.514 } 00:17:43.514 } 00:17:43.514 ]' 00:17:43.514 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.514 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.514 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.514 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.514 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.775 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.775 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.775 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:43.775 19:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:44.720 19:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.720 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.981 00:17:44.981 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.981 19:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.981 19:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.243 { 00:17:45.243 "cntlid": 39, 00:17:45.243 "qid": 0, 00:17:45.243 "state": "enabled", 00:17:45.243 "thread": "nvmf_tgt_poll_group_000", 00:17:45.243 "listen_address": { 00:17:45.243 "trtype": "TCP", 00:17:45.243 "adrfam": "IPv4", 00:17:45.243 "traddr": "10.0.0.2", 00:17:45.243 "trsvcid": "4420" 00:17:45.243 }, 00:17:45.243 "peer_address": { 00:17:45.243 "trtype": "TCP", 00:17:45.243 "adrfam": "IPv4", 00:17:45.243 "traddr": "10.0.0.1", 00:17:45.243 "trsvcid": "45028" 00:17:45.243 }, 00:17:45.243 "auth": { 00:17:45.243 "state": "completed", 00:17:45.243 "digest": "sha256", 00:17:45.243 "dhgroup": "ffdhe6144" 00:17:45.243 } 00:17:45.243 } 00:17:45.243 ]' 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.243 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.505 19:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.490 19:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.490 19:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.490 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.062 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.062 { 00:17:47.062 "cntlid": 41, 00:17:47.062 "qid": 0, 00:17:47.062 "state": "enabled", 00:17:47.062 "thread": 
"nvmf_tgt_poll_group_000", 00:17:47.062 "listen_address": { 00:17:47.062 "trtype": "TCP", 00:17:47.062 "adrfam": "IPv4", 00:17:47.062 "traddr": "10.0.0.2", 00:17:47.062 "trsvcid": "4420" 00:17:47.062 }, 00:17:47.062 "peer_address": { 00:17:47.062 "trtype": "TCP", 00:17:47.062 "adrfam": "IPv4", 00:17:47.062 "traddr": "10.0.0.1", 00:17:47.062 "trsvcid": "39990" 00:17:47.062 }, 00:17:47.062 "auth": { 00:17:47.062 "state": "completed", 00:17:47.062 "digest": "sha256", 00:17:47.062 "dhgroup": "ffdhe8192" 00:17:47.062 } 00:17:47.062 } 00:17:47.062 ]' 00:17:47.062 19:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.323 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.323 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.323 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.323 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.323 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.323 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.323 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.584 19:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.156 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.417 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.990 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.990 { 00:17:48.990 "cntlid": 43, 00:17:48.990 "qid": 0, 00:17:48.990 "state": "enabled", 00:17:48.990 "thread": "nvmf_tgt_poll_group_000", 00:17:48.990 "listen_address": { 00:17:48.990 "trtype": "TCP", 00:17:48.990 "adrfam": "IPv4", 00:17:48.990 "traddr": "10.0.0.2", 00:17:48.990 "trsvcid": "4420" 00:17:48.990 }, 00:17:48.990 "peer_address": { 00:17:48.990 "trtype": "TCP", 00:17:48.990 "adrfam": "IPv4", 00:17:48.990 "traddr": "10.0.0.1", 00:17:48.990 "trsvcid": "40020" 00:17:48.990 }, 00:17:48.990 "auth": { 00:17:48.990 "state": "completed", 00:17:48.990 "digest": "sha256", 00:17:48.990 "dhgroup": "ffdhe8192" 00:17:48.990 } 00:17:48.990 } 00:17:48.990 ]' 00:17:48.990 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.251 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.251 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.251 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.251 19:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.251 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.251 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.251 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.251 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.194 19:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.194 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.194 19:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.766 00:17:50.766 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.766 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.766 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.027 { 00:17:51.027 "cntlid": 45, 00:17:51.027 "qid": 0, 00:17:51.027 "state": "enabled", 00:17:51.027 "thread": "nvmf_tgt_poll_group_000", 00:17:51.027 "listen_address": { 00:17:51.027 "trtype": "TCP", 00:17:51.027 "adrfam": "IPv4", 00:17:51.027 "traddr": "10.0.0.2", 00:17:51.027 "trsvcid": "4420" 00:17:51.027 }, 00:17:51.027 "peer_address": { 00:17:51.027 "trtype": "TCP", 00:17:51.027 "adrfam": "IPv4", 00:17:51.027 "traddr": "10.0.0.1", 
00:17:51.027 "trsvcid": "40046" 00:17:51.027 }, 00:17:51.027 "auth": { 00:17:51.027 "state": "completed", 00:17:51.027 "digest": "sha256", 00:17:51.027 "dhgroup": "ffdhe8192" 00:17:51.027 } 00:17:51.027 } 00:17:51.027 ]' 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.027 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.028 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.288 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.288 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.288 19:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.288 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:17:52.232 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.232 19:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.232 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.232 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.232 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.232 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.232 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.232 19:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.232 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.804 00:17:52.804 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.804 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.804 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.065 19:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.065 { 00:17:53.065 "cntlid": 47, 00:17:53.065 "qid": 0, 00:17:53.065 "state": "enabled", 00:17:53.065 "thread": "nvmf_tgt_poll_group_000", 00:17:53.065 "listen_address": { 00:17:53.065 "trtype": "TCP", 00:17:53.065 "adrfam": "IPv4", 00:17:53.065 "traddr": "10.0.0.2", 00:17:53.065 "trsvcid": "4420" 00:17:53.065 }, 00:17:53.065 "peer_address": { 00:17:53.065 "trtype": "TCP", 00:17:53.065 "adrfam": "IPv4", 00:17:53.065 "traddr": "10.0.0.1", 00:17:53.065 "trsvcid": "40068" 00:17:53.065 }, 00:17:53.065 "auth": { 00:17:53.065 "state": "completed", 00:17:53.065 "digest": "sha256", 00:17:53.065 "dhgroup": "ffdhe8192" 00:17:53.065 } 00:17:53.065 } 00:17:53.065 ]' 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.065 19:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.326 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:17:53.898 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.899 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.899 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.899 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.160 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.160 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:54.160 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.160 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.160 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:54.160 19:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.160 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.421 00:17:54.421 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.421 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.421 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.682 { 00:17:54.682 "cntlid": 49, 00:17:54.682 "qid": 0, 00:17:54.682 "state": "enabled", 00:17:54.682 "thread": "nvmf_tgt_poll_group_000", 00:17:54.682 "listen_address": { 00:17:54.682 "trtype": "TCP", 00:17:54.682 "adrfam": "IPv4", 00:17:54.682 "traddr": "10.0.0.2", 00:17:54.682 "trsvcid": "4420" 00:17:54.682 }, 00:17:54.682 "peer_address": { 00:17:54.682 "trtype": "TCP", 00:17:54.682 "adrfam": "IPv4", 00:17:54.682 "traddr": "10.0.0.1", 00:17:54.682 "trsvcid": "40106" 00:17:54.682 }, 00:17:54.682 "auth": { 00:17:54.682 "state": "completed", 00:17:54.682 "digest": "sha384", 00:17:54.682 "dhgroup": "null" 00:17:54.682 } 00:17:54.682 } 00:17:54.682 ]' 00:17:54.682 
19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.682 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.943 19:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:17:55.515 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.776 
19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.776 19:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.776 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.037 00:17:56.037 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.037 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.037 19:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.298 { 00:17:56.298 "cntlid": 51, 00:17:56.298 "qid": 0, 00:17:56.298 "state": "enabled", 00:17:56.298 "thread": "nvmf_tgt_poll_group_000", 00:17:56.298 "listen_address": { 00:17:56.298 "trtype": "TCP", 00:17:56.298 "adrfam": "IPv4", 00:17:56.298 "traddr": "10.0.0.2", 00:17:56.298 "trsvcid": "4420" 00:17:56.298 }, 00:17:56.298 "peer_address": { 00:17:56.298 "trtype": "TCP", 00:17:56.298 "adrfam": "IPv4", 00:17:56.298 "traddr": "10.0.0.1", 00:17:56.298 "trsvcid": "57506" 00:17:56.298 }, 00:17:56.298 "auth": { 00:17:56.298 "state": "completed", 00:17:56.298 "digest": "sha384", 00:17:56.298 "dhgroup": "null" 00:17:56.298 } 00:17:56.298 } 00:17:56.298 ]' 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.298 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.558 19:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.500 19:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.500 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.761 00:17:57.761 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.762 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.762 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.024 { 00:17:58.024 "cntlid": 53, 00:17:58.024 "qid": 0, 00:17:58.024 "state": "enabled", 00:17:58.024 "thread": "nvmf_tgt_poll_group_000", 00:17:58.024 "listen_address": { 00:17:58.024 "trtype": "TCP", 00:17:58.024 "adrfam": "IPv4", 00:17:58.024 "traddr": "10.0.0.2", 00:17:58.024 "trsvcid": "4420" 00:17:58.024 }, 00:17:58.024 "peer_address": { 00:17:58.024 "trtype": "TCP", 00:17:58.024 "adrfam": "IPv4", 00:17:58.024 "traddr": "10.0.0.1", 00:17:58.024 "trsvcid": "57540" 00:17:58.024 }, 00:17:58.024 "auth": { 00:17:58.024 "state": "completed", 00:17:58.024 "digest": "sha384", 00:17:58.024 "dhgroup": "null" 00:17:58.024 } 00:17:58.024 } 00:17:58.024 ]' 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:58.024 19:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.024 19:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.285 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.856 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.117 19:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.378 00:17:59.378 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.378 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.378 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.640 { 00:17:59.640 "cntlid": 55, 00:17:59.640 "qid": 0, 00:17:59.640 "state": "enabled", 00:17:59.640 "thread": "nvmf_tgt_poll_group_000", 00:17:59.640 "listen_address": { 00:17:59.640 "trtype": "TCP", 00:17:59.640 "adrfam": "IPv4", 00:17:59.640 "traddr": "10.0.0.2", 00:17:59.640 "trsvcid": "4420" 00:17:59.640 }, 00:17:59.640 "peer_address": { 00:17:59.640 "trtype": "TCP", 00:17:59.640 "adrfam": "IPv4", 00:17:59.640 "traddr": "10.0.0.1", 00:17:59.640 "trsvcid": "57572" 00:17:59.640 }, 00:17:59.640 "auth": { 
00:17:59.640 "state": "completed", 00:17:59.640 "digest": "sha384", 00:17:59.640 "dhgroup": "null" 00:17:59.640 } 00:17:59.640 } 00:17:59.640 ]' 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.640 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.902 19:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.885 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.148 00:18:01.148 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.148 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.148 19:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.148 { 00:18:01.148 "cntlid": 57, 00:18:01.148 "qid": 0, 00:18:01.148 "state": "enabled", 00:18:01.148 "thread": "nvmf_tgt_poll_group_000", 00:18:01.148 "listen_address": { 00:18:01.148 "trtype": "TCP", 00:18:01.148 "adrfam": "IPv4", 00:18:01.148 "traddr": "10.0.0.2", 00:18:01.148 "trsvcid": "4420" 00:18:01.148 }, 00:18:01.148 "peer_address": { 00:18:01.148 "trtype": "TCP", 00:18:01.148 "adrfam": "IPv4", 00:18:01.148 "traddr": "10.0.0.1", 00:18:01.148 "trsvcid": "57586" 00:18:01.148 }, 00:18:01.148 "auth": { 00:18:01.148 "state": "completed", 00:18:01.148 "digest": "sha384", 00:18:01.148 "dhgroup": "ffdhe2048" 00:18:01.148 } 00:18:01.148 } 00:18:01.148 ]' 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.148 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.409 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.409 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.410 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.410 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.410 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.410 19:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.354 19:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.354 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:02.615 00:18:02.615 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.615 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.615 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.876 { 00:18:02.876 "cntlid": 59, 00:18:02.876 "qid": 0, 00:18:02.876 "state": "enabled", 00:18:02.876 "thread": "nvmf_tgt_poll_group_000", 00:18:02.876 "listen_address": { 00:18:02.876 "trtype": "TCP", 00:18:02.876 "adrfam": "IPv4", 00:18:02.876 "traddr": "10.0.0.2", 00:18:02.876 "trsvcid": "4420" 00:18:02.876 }, 00:18:02.876 "peer_address": { 00:18:02.876 "trtype": "TCP", 00:18:02.876 "adrfam": "IPv4", 00:18:02.876 "traddr": "10.0.0.1", 00:18:02.876 "trsvcid": "57604" 00:18:02.876 }, 00:18:02.876 "auth": { 00:18:02.876 "state": "completed", 00:18:02.876 "digest": "sha384", 00:18:02.876 "dhgroup": "ffdhe2048" 00:18:02.876 } 00:18:02.876 } 00:18:02.876 ]' 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.876 
19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.876 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.137 19:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.080 19:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.080 19:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.080 19:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.342 00:18:04.342 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.342 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.342 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.604 { 
00:18:04.604 "cntlid": 61, 00:18:04.604 "qid": 0, 00:18:04.604 "state": "enabled", 00:18:04.604 "thread": "nvmf_tgt_poll_group_000", 00:18:04.604 "listen_address": { 00:18:04.604 "trtype": "TCP", 00:18:04.604 "adrfam": "IPv4", 00:18:04.604 "traddr": "10.0.0.2", 00:18:04.604 "trsvcid": "4420" 00:18:04.604 }, 00:18:04.604 "peer_address": { 00:18:04.604 "trtype": "TCP", 00:18:04.604 "adrfam": "IPv4", 00:18:04.604 "traddr": "10.0.0.1", 00:18:04.604 "trsvcid": "57616" 00:18:04.604 }, 00:18:04.604 "auth": { 00:18:04.604 "state": "completed", 00:18:04.604 "digest": "sha384", 00:18:04.604 "dhgroup": "ffdhe2048" 00:18:04.604 } 00:18:04.604 } 00:18:04.604 ]' 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.604 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.865 19:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.438 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.699 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.960 00:18:05.960 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.960 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.960 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.222 19:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.222 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.222 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.222 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.222 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.222 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.222 { 00:18:06.222 "cntlid": 63, 00:18:06.222 "qid": 0, 00:18:06.222 "state": "enabled", 00:18:06.222 "thread": "nvmf_tgt_poll_group_000", 00:18:06.222 "listen_address": { 00:18:06.222 "trtype": "TCP", 00:18:06.222 "adrfam": "IPv4", 00:18:06.222 "traddr": "10.0.0.2", 00:18:06.222 "trsvcid": "4420" 00:18:06.222 }, 00:18:06.222 "peer_address": { 00:18:06.222 "trtype": "TCP", 00:18:06.222 "adrfam": "IPv4", 00:18:06.222 "traddr": "10.0.0.1", 00:18:06.222 "trsvcid": "57656" 00:18:06.222 }, 00:18:06.222 "auth": { 00:18:06.222 "state": "completed", 00:18:06.222 "digest": "sha384", 00:18:06.222 "dhgroup": "ffdhe2048" 00:18:06.222 } 00:18:06.222 } 00:18:06.222 ]' 00:18:06.222 19:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.222 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.222 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.222 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.222 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.222 19:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.222 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.222 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.483 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.056 19:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.317 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.317 19:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.578 00:18:07.578 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.578 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.578 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.839 { 00:18:07.839 "cntlid": 65, 00:18:07.839 "qid": 0, 00:18:07.839 "state": "enabled", 00:18:07.839 "thread": "nvmf_tgt_poll_group_000", 00:18:07.839 "listen_address": { 00:18:07.839 "trtype": "TCP", 00:18:07.839 "adrfam": "IPv4", 00:18:07.839 "traddr": "10.0.0.2", 00:18:07.839 "trsvcid": "4420" 00:18:07.839 }, 00:18:07.839 "peer_address": { 00:18:07.839 "trtype": "TCP", 00:18:07.839 "adrfam": "IPv4", 00:18:07.839 "traddr": "10.0.0.1", 
00:18:07.839 "trsvcid": "37270" 00:18:07.839 }, 00:18:07.839 "auth": { 00:18:07.839 "state": "completed", 00:18:07.839 "digest": "sha384", 00:18:07.839 "dhgroup": "ffdhe3072" 00:18:07.839 } 00:18:07.839 } 00:18:07.839 ]' 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.839 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.102 19:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:08.674 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.935 19:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.195 00:18:09.196 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.196 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.196 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.456 { 00:18:09.456 "cntlid": 67, 00:18:09.456 "qid": 0, 00:18:09.456 "state": "enabled", 00:18:09.456 "thread": "nvmf_tgt_poll_group_000", 00:18:09.456 "listen_address": { 00:18:09.456 "trtype": "TCP", 00:18:09.456 "adrfam": "IPv4", 00:18:09.456 "traddr": "10.0.0.2", 00:18:09.456 "trsvcid": "4420" 00:18:09.456 }, 00:18:09.456 "peer_address": { 00:18:09.456 "trtype": "TCP", 00:18:09.456 "adrfam": "IPv4", 00:18:09.456 "traddr": "10.0.0.1", 00:18:09.456 "trsvcid": "37304" 00:18:09.456 }, 00:18:09.456 "auth": { 00:18:09.456 "state": "completed", 00:18:09.456 "digest": "sha384", 00:18:09.456 "dhgroup": "ffdhe3072" 00:18:09.456 } 00:18:09.456 } 00:18:09.456 ]' 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.456 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.717 19:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:10.288 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.550 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.811 00:18:10.811 19:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.811 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.811 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.071 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.071 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.071 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.071 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.071 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.071 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.071 { 00:18:11.071 "cntlid": 69, 00:18:11.071 "qid": 0, 00:18:11.071 "state": "enabled", 00:18:11.071 "thread": "nvmf_tgt_poll_group_000", 00:18:11.072 "listen_address": { 00:18:11.072 "trtype": "TCP", 00:18:11.072 "adrfam": "IPv4", 00:18:11.072 "traddr": "10.0.0.2", 00:18:11.072 "trsvcid": "4420" 00:18:11.072 }, 00:18:11.072 "peer_address": { 00:18:11.072 "trtype": "TCP", 00:18:11.072 "adrfam": "IPv4", 00:18:11.072 "traddr": "10.0.0.1", 00:18:11.072 "trsvcid": "37320" 00:18:11.072 }, 00:18:11.072 "auth": { 00:18:11.072 "state": "completed", 00:18:11.072 "digest": "sha384", 00:18:11.072 "dhgroup": "ffdhe3072" 00:18:11.072 } 00:18:11.072 } 00:18:11.072 ]' 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.072 19:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.333 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.275 19:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.275 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.535 00:18:12.535 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.535 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.535 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.535 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.535 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.535 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.535 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.795 { 00:18:12.795 "cntlid": 71, 00:18:12.795 "qid": 0, 00:18:12.795 "state": "enabled", 00:18:12.795 "thread": "nvmf_tgt_poll_group_000", 
00:18:12.795 "listen_address": { 00:18:12.795 "trtype": "TCP", 00:18:12.795 "adrfam": "IPv4", 00:18:12.795 "traddr": "10.0.0.2", 00:18:12.795 "trsvcid": "4420" 00:18:12.795 }, 00:18:12.795 "peer_address": { 00:18:12.795 "trtype": "TCP", 00:18:12.795 "adrfam": "IPv4", 00:18:12.795 "traddr": "10.0.0.1", 00:18:12.795 "trsvcid": "37360" 00:18:12.795 }, 00:18:12.795 "auth": { 00:18:12.795 "state": "completed", 00:18:12.795 "digest": "sha384", 00:18:12.795 "dhgroup": "ffdhe3072" 00:18:12.795 } 00:18:12.795 } 00:18:12.795 ]' 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.795 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.056 19:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 
00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:13.627 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.887 19:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.148 00:18:14.148 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.148 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.148 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.409 19:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.409 { 00:18:14.409 "cntlid": 73, 00:18:14.409 "qid": 0, 00:18:14.409 "state": "enabled", 00:18:14.409 "thread": "nvmf_tgt_poll_group_000", 00:18:14.409 "listen_address": { 00:18:14.409 "trtype": "TCP", 00:18:14.409 "adrfam": "IPv4", 00:18:14.409 "traddr": "10.0.0.2", 00:18:14.409 "trsvcid": "4420" 00:18:14.409 }, 00:18:14.409 "peer_address": { 00:18:14.409 "trtype": "TCP", 00:18:14.409 "adrfam": "IPv4", 00:18:14.409 "traddr": "10.0.0.1", 00:18:14.409 "trsvcid": "37396" 00:18:14.409 }, 00:18:14.409 "auth": { 00:18:14.409 "state": "completed", 00:18:14.409 "digest": "sha384", 00:18:14.409 "dhgroup": "ffdhe4096" 00:18:14.409 } 00:18:14.409 } 00:18:14.409 ]' 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.409 19:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.409 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.670 19:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.615 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.875 00:18:15.875 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.875 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.875 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.135 { 00:18:16.135 "cntlid": 75, 00:18:16.135 "qid": 0, 00:18:16.135 "state": "enabled", 00:18:16.135 "thread": "nvmf_tgt_poll_group_000", 00:18:16.135 "listen_address": { 00:18:16.135 "trtype": "TCP", 00:18:16.135 "adrfam": "IPv4", 00:18:16.135 "traddr": "10.0.0.2", 00:18:16.135 "trsvcid": "4420" 00:18:16.135 }, 00:18:16.135 "peer_address": { 00:18:16.135 "trtype": "TCP", 00:18:16.135 "adrfam": "IPv4", 00:18:16.135 "traddr": "10.0.0.1", 00:18:16.135 "trsvcid": "37420" 00:18:16.135 
}, 00:18:16.135 "auth": { 00:18:16.135 "state": "completed", 00:18:16.135 "digest": "sha384", 00:18:16.135 "dhgroup": "ffdhe4096" 00:18:16.135 } 00:18:16.135 } 00:18:16.135 ]' 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.135 19:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.135 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.135 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.135 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.396 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.967 19:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.228 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.229 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.489 00:18:17.489 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.489 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.489 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.750 { 00:18:17.750 "cntlid": 77, 00:18:17.750 "qid": 0, 00:18:17.750 "state": "enabled", 00:18:17.750 "thread": "nvmf_tgt_poll_group_000", 00:18:17.750 "listen_address": { 00:18:17.750 "trtype": "TCP", 00:18:17.750 "adrfam": "IPv4", 00:18:17.750 "traddr": "10.0.0.2", 00:18:17.750 "trsvcid": "4420" 00:18:17.750 }, 00:18:17.750 "peer_address": { 00:18:17.750 "trtype": "TCP", 00:18:17.750 "adrfam": "IPv4", 00:18:17.750 "traddr": "10.0.0.1", 00:18:17.750 "trsvcid": "34584" 00:18:17.750 }, 00:18:17.750 "auth": { 00:18:17.750 "state": "completed", 00:18:17.750 "digest": "sha384", 00:18:17.750 "dhgroup": "ffdhe4096" 00:18:17.750 } 00:18:17.750 } 00:18:17.750 ]' 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.750 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:18.011 19:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:18.954 19:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.954 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.216 00:18:19.216 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.216 19:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.216 19:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.216 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.216 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.216 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.216 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.216 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.216 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.216 { 00:18:19.216 "cntlid": 79, 00:18:19.216 "qid": 0, 00:18:19.216 "state": "enabled", 00:18:19.216 "thread": "nvmf_tgt_poll_group_000", 00:18:19.216 "listen_address": { 00:18:19.216 "trtype": "TCP", 00:18:19.216 "adrfam": "IPv4", 00:18:19.216 "traddr": "10.0.0.2", 00:18:19.216 "trsvcid": "4420" 00:18:19.216 }, 00:18:19.216 "peer_address": { 00:18:19.216 "trtype": "TCP", 00:18:19.216 "adrfam": "IPv4", 00:18:19.216 "traddr": "10.0.0.1", 00:18:19.216 "trsvcid": "34604" 00:18:19.216 }, 00:18:19.216 "auth": { 00:18:19.216 "state": "completed", 00:18:19.216 "digest": "sha384", 00:18:19.216 "dhgroup": "ffdhe4096" 00:18:19.216 } 00:18:19.216 } 00:18:19.216 ]' 00:18:19.216 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.477 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.477 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.477 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.477 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.477 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.477 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.477 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.738 19:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.311 19:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.311 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.571 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.572 19:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.572 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.832 00:18:20.832 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.832 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.832 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.094 { 00:18:21.094 "cntlid": 81, 00:18:21.094 "qid": 0, 00:18:21.094 "state": "enabled", 00:18:21.094 "thread": 
"nvmf_tgt_poll_group_000", 00:18:21.094 "listen_address": { 00:18:21.094 "trtype": "TCP", 00:18:21.094 "adrfam": "IPv4", 00:18:21.094 "traddr": "10.0.0.2", 00:18:21.094 "trsvcid": "4420" 00:18:21.094 }, 00:18:21.094 "peer_address": { 00:18:21.094 "trtype": "TCP", 00:18:21.094 "adrfam": "IPv4", 00:18:21.094 "traddr": "10.0.0.1", 00:18:21.094 "trsvcid": "34634" 00:18:21.094 }, 00:18:21.094 "auth": { 00:18:21.094 "state": "completed", 00:18:21.094 "digest": "sha384", 00:18:21.094 "dhgroup": "ffdhe6144" 00:18:21.094 } 00:18:21.094 } 00:18:21.094 ]' 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.094 19:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.094 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.094 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.094 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.355 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:22.299 19:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.299 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.560 00:18:22.560 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.560 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.560 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.821 { 00:18:22.821 "cntlid": 83, 00:18:22.821 "qid": 0, 00:18:22.821 "state": "enabled", 00:18:22.821 "thread": "nvmf_tgt_poll_group_000", 00:18:22.821 "listen_address": { 00:18:22.821 "trtype": "TCP", 00:18:22.821 "adrfam": "IPv4", 00:18:22.821 "traddr": "10.0.0.2", 00:18:22.821 "trsvcid": "4420" 00:18:22.821 }, 00:18:22.821 "peer_address": { 00:18:22.821 "trtype": "TCP", 00:18:22.821 "adrfam": "IPv4", 00:18:22.821 "traddr": "10.0.0.1", 00:18:22.821 "trsvcid": "34670" 00:18:22.821 }, 00:18:22.821 "auth": { 00:18:22.821 "state": "completed", 00:18:22.821 "digest": "sha384", 00:18:22.821 "dhgroup": "ffdhe6144" 00:18:22.821 } 00:18:22.821 } 00:18:22.821 ]' 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.821 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.082 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.082 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.082 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.082 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.082 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.082 19:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.025 19:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.025 19:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.598 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.598 { 00:18:24.598 "cntlid": 85, 00:18:24.598 "qid": 0, 00:18:24.598 "state": "enabled", 00:18:24.598 "thread": "nvmf_tgt_poll_group_000", 00:18:24.598 "listen_address": { 00:18:24.598 "trtype": "TCP", 00:18:24.598 "adrfam": "IPv4", 00:18:24.598 "traddr": "10.0.0.2", 00:18:24.598 "trsvcid": "4420" 00:18:24.598 }, 00:18:24.598 "peer_address": { 00:18:24.598 "trtype": "TCP", 00:18:24.598 "adrfam": "IPv4", 00:18:24.598 "traddr": "10.0.0.1", 
00:18:24.598 "trsvcid": "34708" 00:18:24.598 }, 00:18:24.598 "auth": { 00:18:24.598 "state": "completed", 00:18:24.598 "digest": "sha384", 00:18:24.598 "dhgroup": "ffdhe6144" 00:18:24.598 } 00:18:24.598 } 00:18:24.598 ]' 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.598 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.859 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.859 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.859 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.859 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.859 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.859 19:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:18:25.801 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.801 19:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.801 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.801 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.801 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.801 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.801 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.802 19:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.062 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.323 19:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.323 { 00:18:26.323 "cntlid": 87, 00:18:26.323 "qid": 0, 00:18:26.323 "state": "enabled", 00:18:26.323 "thread": "nvmf_tgt_poll_group_000", 00:18:26.323 "listen_address": { 00:18:26.323 "trtype": "TCP", 00:18:26.323 "adrfam": "IPv4", 00:18:26.323 "traddr": "10.0.0.2", 00:18:26.323 "trsvcid": "4420" 00:18:26.323 }, 00:18:26.323 "peer_address": { 00:18:26.323 "trtype": "TCP", 00:18:26.323 "adrfam": "IPv4", 00:18:26.323 "traddr": "10.0.0.1", 00:18:26.323 "trsvcid": "33610" 00:18:26.323 }, 00:18:26.323 "auth": { 00:18:26.323 "state": "completed", 00:18:26.323 "digest": "sha384", 00:18:26.323 "dhgroup": "ffdhe6144" 00:18:26.323 } 00:18:26.323 } 00:18:26.323 ]' 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.323 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.585 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.585 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.585 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.585 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.585 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.585 19:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.528 19:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.528 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:28.100 00:18:28.100 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.100 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.100 19:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.362 { 00:18:28.362 "cntlid": 89, 00:18:28.362 "qid": 0, 00:18:28.362 "state": "enabled", 00:18:28.362 "thread": "nvmf_tgt_poll_group_000", 00:18:28.362 "listen_address": { 00:18:28.362 "trtype": "TCP", 00:18:28.362 "adrfam": "IPv4", 00:18:28.362 "traddr": "10.0.0.2", 00:18:28.362 "trsvcid": "4420" 00:18:28.362 }, 00:18:28.362 "peer_address": { 00:18:28.362 "trtype": "TCP", 00:18:28.362 "adrfam": "IPv4", 00:18:28.362 "traddr": "10.0.0.1", 00:18:28.362 "trsvcid": "33636" 00:18:28.362 }, 00:18:28.362 "auth": { 00:18:28.362 "state": "completed", 00:18:28.362 "digest": "sha384", 00:18:28.362 "dhgroup": "ffdhe8192" 00:18:28.362 } 00:18:28.362 } 00:18:28.362 ]' 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.362 
19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.362 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.623 19:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.567 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.139 00:18:30.139 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.139 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.139 19:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.139 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.139 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.139 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.139 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:18:30.431 { 00:18:30.431 "cntlid": 91, 00:18:30.431 "qid": 0, 00:18:30.431 "state": "enabled", 00:18:30.431 "thread": "nvmf_tgt_poll_group_000", 00:18:30.431 "listen_address": { 00:18:30.431 "trtype": "TCP", 00:18:30.431 "adrfam": "IPv4", 00:18:30.431 "traddr": "10.0.0.2", 00:18:30.431 "trsvcid": "4420" 00:18:30.431 }, 00:18:30.431 "peer_address": { 00:18:30.431 "trtype": "TCP", 00:18:30.431 "adrfam": "IPv4", 00:18:30.431 "traddr": "10.0.0.1", 00:18:30.431 "trsvcid": "33664" 00:18:30.431 }, 00:18:30.431 "auth": { 00:18:30.431 "state": "completed", 00:18:30.431 "digest": "sha384", 00:18:30.431 "dhgroup": "ffdhe8192" 00:18:30.431 } 00:18:30.431 } 00:18:30.431 ]' 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.431 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.692 19:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.265 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.526 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.527 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.527 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.098 00:18:32.098 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.099 19:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.099 19:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.099 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.360 { 00:18:32.360 "cntlid": 93, 00:18:32.360 "qid": 0, 00:18:32.360 "state": "enabled", 00:18:32.360 "thread": "nvmf_tgt_poll_group_000", 00:18:32.360 "listen_address": { 00:18:32.360 "trtype": "TCP", 00:18:32.360 "adrfam": "IPv4", 00:18:32.360 "traddr": "10.0.0.2", 00:18:32.360 "trsvcid": "4420" 00:18:32.360 }, 00:18:32.360 "peer_address": { 00:18:32.360 "trtype": "TCP", 00:18:32.360 "adrfam": "IPv4", 00:18:32.360 "traddr": "10.0.0.1", 00:18:32.360 "trsvcid": "33678" 00:18:32.360 }, 00:18:32.360 "auth": { 00:18:32.360 "state": "completed", 00:18:32.360 "digest": "sha384", 00:18:32.360 "dhgroup": "ffdhe8192" 00:18:32.360 } 00:18:32.360 } 00:18:32.360 ]' 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.360 19:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.360 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.621 19:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:18:33.192 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.192 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.193 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.193 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.193 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.193 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.193 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.193 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.453 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:33.453 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.453 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.453 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.453 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.453 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.454 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:33.454 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.454 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.454 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.454 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.454 19:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.025 00:18:34.025 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.025 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.025 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.286 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.286 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.286 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.286 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.286 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.286 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.286 { 00:18:34.286 "cntlid": 95, 00:18:34.286 "qid": 0, 00:18:34.286 "state": "enabled", 00:18:34.286 "thread": "nvmf_tgt_poll_group_000", 00:18:34.286 "listen_address": { 00:18:34.286 "trtype": "TCP", 00:18:34.286 "adrfam": "IPv4", 00:18:34.286 "traddr": "10.0.0.2", 00:18:34.286 "trsvcid": "4420" 00:18:34.286 }, 00:18:34.286 "peer_address": { 00:18:34.286 "trtype": "TCP", 00:18:34.286 "adrfam": "IPv4", 00:18:34.286 "traddr": "10.0.0.1", 00:18:34.286 "trsvcid": 
"33714" 00:18:34.286 }, 00:18:34.286 "auth": { 00:18:34.286 "state": "completed", 00:18:34.286 "digest": "sha384", 00:18:34.286 "dhgroup": "ffdhe8192" 00:18:34.286 } 00:18:34.286 } 00:18:34.286 ]' 00:18:34.286 19:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.286 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.286 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.286 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.286 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.286 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.286 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.286 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.547 19:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:35.119 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.380 19:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.380 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.641 00:18:35.641 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.641 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.641 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.903 { 00:18:35.903 "cntlid": 97, 00:18:35.903 "qid": 0, 00:18:35.903 "state": "enabled", 00:18:35.903 "thread": "nvmf_tgt_poll_group_000", 00:18:35.903 "listen_address": { 00:18:35.903 "trtype": "TCP", 00:18:35.903 "adrfam": "IPv4", 00:18:35.903 "traddr": "10.0.0.2", 00:18:35.903 "trsvcid": "4420" 00:18:35.903 }, 00:18:35.903 "peer_address": { 00:18:35.903 "trtype": "TCP", 00:18:35.903 "adrfam": "IPv4", 00:18:35.903 "traddr": "10.0.0.1", 00:18:35.903 "trsvcid": "33744" 00:18:35.903 }, 00:18:35.903 "auth": { 00:18:35.903 "state": "completed", 00:18:35.903 "digest": "sha512", 00:18:35.903 "dhgroup": "null" 00:18:35.903 } 00:18:35.903 } 00:18:35.903 ]' 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:35.903 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.904 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.904 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:35.904 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.164 19:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:36.737 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.998 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.260 00:18:37.260 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.260 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.260 19:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.260 { 00:18:37.260 "cntlid": 99, 00:18:37.260 "qid": 0, 00:18:37.260 "state": "enabled", 00:18:37.260 "thread": "nvmf_tgt_poll_group_000", 00:18:37.260 "listen_address": { 00:18:37.260 "trtype": "TCP", 00:18:37.260 "adrfam": "IPv4", 00:18:37.260 "traddr": "10.0.0.2", 00:18:37.260 "trsvcid": "4420" 00:18:37.260 }, 00:18:37.260 "peer_address": { 00:18:37.260 "trtype": "TCP", 00:18:37.260 "adrfam": "IPv4", 00:18:37.260 "traddr": "10.0.0.1", 00:18:37.260 "trsvcid": "42028" 00:18:37.260 }, 00:18:37.260 "auth": { 00:18:37.260 "state": "completed", 00:18:37.260 "digest": "sha512", 00:18:37.260 "dhgroup": "null" 00:18:37.260 } 00:18:37.260 } 00:18:37.260 ]' 00:18:37.260 
19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.260 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.521 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:37.521 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.521 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.521 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.521 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.521 19:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.464 19:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.464 19:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.464 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.725 00:18:38.725 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.726 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.726 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.986 { 00:18:38.986 "cntlid": 101, 00:18:38.986 "qid": 0, 00:18:38.986 "state": "enabled", 00:18:38.986 "thread": "nvmf_tgt_poll_group_000", 00:18:38.986 "listen_address": { 00:18:38.986 "trtype": "TCP", 00:18:38.986 "adrfam": "IPv4", 00:18:38.986 "traddr": "10.0.0.2", 00:18:38.986 "trsvcid": "4420" 00:18:38.986 }, 00:18:38.986 "peer_address": { 00:18:38.986 "trtype": "TCP", 00:18:38.986 "adrfam": "IPv4", 00:18:38.986 "traddr": "10.0.0.1", 00:18:38.986 "trsvcid": "42058" 00:18:38.986 }, 00:18:38.986 "auth": { 00:18:38.986 "state": "completed", 00:18:38.986 "digest": "sha512", 00:18:38.986 "dhgroup": "null" 00:18:38.986 } 00:18:38.986 } 00:18:38.986 ]' 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.986 19:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.247 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.189 19:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.189 19:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.189 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.189 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.451 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.451 { 00:18:40.451 "cntlid": 103, 00:18:40.451 "qid": 0, 00:18:40.451 "state": "enabled", 00:18:40.451 "thread": "nvmf_tgt_poll_group_000", 00:18:40.451 "listen_address": { 00:18:40.451 "trtype": "TCP", 00:18:40.451 "adrfam": "IPv4", 00:18:40.451 "traddr": "10.0.0.2", 00:18:40.451 "trsvcid": "4420" 00:18:40.451 }, 00:18:40.451 "peer_address": { 00:18:40.451 "trtype": "TCP", 00:18:40.451 "adrfam": "IPv4", 00:18:40.451 "traddr": "10.0.0.1", 00:18:40.451 "trsvcid": "42090" 00:18:40.451 }, 00:18:40.451 "auth": { 00:18:40.451 "state": "completed", 00:18:40.451 "digest": "sha512", 00:18:40.451 "dhgroup": "null" 00:18:40.451 } 00:18:40.451 } 00:18:40.451 ]' 00:18:40.451 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.711 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.712 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.712 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:40.712 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:40.712 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.712 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.712 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.972 19:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:41.540 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:18:41.799 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.059 00:18:42.059 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.059 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.059 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.059 19:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.059 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.059 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.059 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.320 { 00:18:42.320 "cntlid": 105, 00:18:42.320 "qid": 0, 00:18:42.320 "state": "enabled", 00:18:42.320 "thread": "nvmf_tgt_poll_group_000", 00:18:42.320 "listen_address": { 00:18:42.320 "trtype": "TCP", 00:18:42.320 "adrfam": "IPv4", 00:18:42.320 "traddr": "10.0.0.2", 00:18:42.320 "trsvcid": "4420" 00:18:42.320 }, 00:18:42.320 "peer_address": { 00:18:42.320 "trtype": "TCP", 00:18:42.320 "adrfam": "IPv4", 
00:18:42.320 "traddr": "10.0.0.1", 00:18:42.320 "trsvcid": "42100" 00:18:42.320 }, 00:18:42.320 "auth": { 00:18:42.320 "state": "completed", 00:18:42.320 "digest": "sha512", 00:18:42.320 "dhgroup": "ffdhe2048" 00:18:42.320 } 00:18:42.320 } 00:18:42.320 ]' 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.320 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.581 19:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.152 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.152 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.412 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:43.412 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.412 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.412 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:43.412 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.413 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.413 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.413 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.413 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.413 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.413 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.413 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.672 00:18:43.672 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.672 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.672 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.932 19:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.932 { 00:18:43.932 "cntlid": 107, 00:18:43.932 "qid": 0, 00:18:43.932 "state": "enabled", 00:18:43.932 "thread": "nvmf_tgt_poll_group_000", 00:18:43.932 "listen_address": { 00:18:43.932 "trtype": "TCP", 00:18:43.932 "adrfam": "IPv4", 00:18:43.932 "traddr": "10.0.0.2", 00:18:43.932 "trsvcid": "4420" 00:18:43.932 }, 00:18:43.932 "peer_address": { 00:18:43.932 "trtype": "TCP", 00:18:43.932 "adrfam": "IPv4", 00:18:43.932 "traddr": "10.0.0.1", 00:18:43.932 "trsvcid": "42124" 00:18:43.932 }, 00:18:43.932 "auth": { 00:18:43.932 "state": "completed", 00:18:43.932 "digest": "sha512", 00:18:43.932 "dhgroup": "ffdhe2048" 00:18:43.932 } 00:18:43.932 } 00:18:43.932 ]' 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.932 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.932 19:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.194 19:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.156 19:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.156 19:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:45.416 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.416 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.416 { 00:18:45.416 "cntlid": 109, 00:18:45.416 "qid": 0, 00:18:45.416 "state": "enabled", 00:18:45.416 "thread": "nvmf_tgt_poll_group_000", 00:18:45.416 "listen_address": { 00:18:45.417 "trtype": "TCP", 00:18:45.417 "adrfam": "IPv4", 00:18:45.417 "traddr": "10.0.0.2", 00:18:45.417 "trsvcid": "4420" 00:18:45.417 }, 00:18:45.417 "peer_address": { 00:18:45.417 "trtype": "TCP", 00:18:45.417 "adrfam": "IPv4", 00:18:45.417 "traddr": "10.0.0.1", 00:18:45.417 "trsvcid": "42146" 00:18:45.417 }, 00:18:45.417 "auth": { 00:18:45.417 "state": "completed", 00:18:45.417 "digest": "sha512", 00:18:45.417 "dhgroup": "ffdhe2048" 00:18:45.417 } 00:18:45.417 } 00:18:45.417 ]' 00:18:45.417 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.417 
19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:45.417 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:45.677 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:45.677 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:45.677 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:45.677 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:45.677 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:45.677 19:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr:
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:46.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:46.617 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:46.877
00:18:46.877 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:46.877 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:46.877 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:47.136 {
00:18:47.136 "cntlid": 111,
00:18:47.136 "qid": 0,
00:18:47.136 "state": "enabled",
00:18:47.136 "thread": "nvmf_tgt_poll_group_000",
00:18:47.136 "listen_address": {
00:18:47.136 "trtype": "TCP",
00:18:47.136 "adrfam": "IPv4",
00:18:47.136 "traddr": "10.0.0.2",
00:18:47.136 "trsvcid": "4420"
00:18:47.136 },
00:18:47.136 "peer_address": {
00:18:47.136 "trtype": "TCP",
00:18:47.136 "adrfam": "IPv4",
00:18:47.136 "traddr": "10.0.0.1",
00:18:47.136 "trsvcid": "57272"
00:18:47.136 },
00:18:47.136 "auth": {
00:18:47.136 "state": "completed",
00:18:47.136 "digest": "sha512",
00:18:47.136 "dhgroup": "ffdhe2048"
00:18:47.136 }
00:18:47.136 }
00:18:47.136 ]'
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:47.136 19:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:47.136 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:47.136 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:47.136 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:47.136 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:47.136 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.397 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=:
00:18:48.340 19:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:48.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.340 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:48.600
00:18:48.600 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:48.600 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:48.600 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:48.861 {
00:18:48.861 "cntlid": 113,
00:18:48.861 "qid": 0,
00:18:48.861 "state": "enabled",
00:18:48.861 "thread": "nvmf_tgt_poll_group_000",
00:18:48.861 "listen_address": {
00:18:48.861 "trtype": "TCP",
00:18:48.861 "adrfam": "IPv4",
00:18:48.861 "traddr": "10.0.0.2",
00:18:48.861 "trsvcid": "4420"
00:18:48.861 },
00:18:48.861 "peer_address": {
00:18:48.861 "trtype": "TCP",
00:18:48.861 "adrfam": "IPv4",
00:18:48.861 "traddr": "10.0.0.1",
00:18:48.861 "trsvcid": "57288"
00:18:48.861 },
00:18:48.861 "auth": {
00:18:48.861 "state": "completed",
00:18:48.861 "digest": "sha512",
00:18:48.861 "dhgroup": "ffdhe3072"
00:18:48.861 }
00:18:48.861 }
00:18:48.861 ]'
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:48.861 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:48.862 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:48.862 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:48.862 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:48.862 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.122 19:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=:
00:18:49.695 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:49.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.955 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:49.956 19:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:50.217
00:18:50.217 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:50.217 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:50.217 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:50.478 {
00:18:50.478 "cntlid": 115,
00:18:50.478 "qid": 0,
00:18:50.478 "state": "enabled",
00:18:50.478 "thread": "nvmf_tgt_poll_group_000",
00:18:50.478 "listen_address": {
00:18:50.478 "trtype": "TCP",
00:18:50.478 "adrfam": "IPv4",
00:18:50.478 "traddr": "10.0.0.2",
00:18:50.478 "trsvcid": "4420"
00:18:50.478 },
00:18:50.478 "peer_address": {
00:18:50.478 "trtype": "TCP",
00:18:50.478 "adrfam": "IPv4",
00:18:50.478 "traddr": "10.0.0.1",
00:18:50.478 "trsvcid": "57324"
00:18:50.478 },
00:18:50.478 "auth": {
00:18:50.478 "state": "completed",
00:18:50.478 "digest": "sha512",
00:18:50.478 "dhgroup": "ffdhe3072"
00:18:50.478 }
00:18:50.478 }
00:18:50.478 ]'
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:50.478 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:50.739 19:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==:
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:51.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:51.312 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:51.573 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:51.835
00:18:51.835 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:51.835 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:51.835 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:52.096 {
00:18:52.096 "cntlid": 117,
00:18:52.096 "qid": 0,
00:18:52.096 "state": "enabled",
00:18:52.096 "thread": "nvmf_tgt_poll_group_000",
00:18:52.096 "listen_address": {
00:18:52.096 "trtype": "TCP",
00:18:52.096 "adrfam": "IPv4",
00:18:52.096 "traddr": "10.0.0.2",
00:18:52.096 "trsvcid": "4420"
00:18:52.096 },
00:18:52.096 "peer_address": {
00:18:52.096 "trtype": "TCP",
00:18:52.096 "adrfam": "IPv4",
00:18:52.096 "traddr": "10.0.0.1",
00:18:52.096 "trsvcid": "57350"
00:18:52.096 },
00:18:52.096 "auth": {
00:18:52.096 "state": "completed",
00:18:52.096 "digest": "sha512",
00:18:52.096 "dhgroup": "ffdhe3072"
00:18:52.096 }
00:18:52.096 }
00:18:52.096 ]'
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:52.096 19:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:52.357 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr:
00:18:53.299 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:53.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:53.299 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:53.299 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:53.299 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.300 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:53.300 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:53.300 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:53.300 19:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:53.300 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:53.561
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:53.561 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:53.561 {
00:18:53.561 "cntlid": 119,
00:18:53.561 "qid": 0,
00:18:53.561 "state": "enabled",
00:18:53.561 "thread": "nvmf_tgt_poll_group_000",
00:18:53.561 "listen_address": {
00:18:53.561 "trtype": "TCP",
00:18:53.561 "adrfam": "IPv4",
00:18:53.561 "traddr": "10.0.0.2",
00:18:53.561 "trsvcid": "4420"
00:18:53.561 },
00:18:53.561 "peer_address": {
00:18:53.561 "trtype": "TCP",
00:18:53.561 "adrfam": "IPv4",
00:18:53.561 "traddr": "10.0.0.1",
00:18:53.561 "trsvcid": "57372"
00:18:53.561 },
00:18:53.561 "auth": {
00:18:53.561 "state": "completed",
00:18:53.561 "digest": "sha512",
00:18:53.561 "dhgroup": "ffdhe3072"
00:18:53.561 }
00:18:53.561 }
00:18:53.561 ]'
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:53.822 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:54.082 19:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=:
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:54.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:54.652 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:54.914 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:55.175
00:18:55.175 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:55.175 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:55.175 19:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:55.435 {
00:18:55.435 "cntlid": 121,
00:18:55.435 "qid": 0,
00:18:55.435 "state": "enabled",
00:18:55.435 "thread": "nvmf_tgt_poll_group_000",
00:18:55.435 "listen_address": {
00:18:55.435 "trtype": "TCP",
00:18:55.435 "adrfam": "IPv4",
00:18:55.435 "traddr": "10.0.0.2",
00:18:55.435 "trsvcid": "4420"
00:18:55.435 },
00:18:55.435 "peer_address": {
00:18:55.435 "trtype": "TCP",
00:18:55.435 "adrfam": "IPv4",
00:18:55.435 "traddr": "10.0.0.1",
00:18:55.435 "trsvcid": "57400"
00:18:55.435 },
00:18:55.435 "auth": {
00:18:55.435 "state": "completed",
00:18:55.435 "digest": "sha512",
00:18:55.435 "dhgroup": "ffdhe4096"
00:18:55.435 }
00:18:55.435 }
00:18:55.435 ]'
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:55.435 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:55.696 19:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:18:56.267 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.267 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.267 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.267 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.528 19:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.528 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.789 00:18:56.789 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.789 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.789 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.051 { 00:18:57.051 "cntlid": 123, 00:18:57.051 "qid": 0, 00:18:57.051 "state": "enabled", 00:18:57.051 "thread": "nvmf_tgt_poll_group_000", 00:18:57.051 "listen_address": { 00:18:57.051 "trtype": "TCP", 00:18:57.051 "adrfam": "IPv4", 00:18:57.051 "traddr": "10.0.0.2", 00:18:57.051 "trsvcid": "4420" 00:18:57.051 }, 00:18:57.051 "peer_address": { 00:18:57.051 "trtype": "TCP", 00:18:57.051 "adrfam": "IPv4", 00:18:57.051 "traddr": "10.0.0.1", 00:18:57.051 "trsvcid": "39778" 00:18:57.051 }, 00:18:57.051 "auth": { 00:18:57.051 "state": "completed", 00:18:57.051 "digest": "sha512", 00:18:57.051 "dhgroup": "ffdhe4096" 00:18:57.051 } 00:18:57.051 } 00:18:57.051 ]' 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 
00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.051 19:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.312 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:18:58.255 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.255 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.255 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.255 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.255 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.255 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.255 19:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.255 19:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.255 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.516 00:18:58.516 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.516 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.516 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.777 { 00:18:58.777 "cntlid": 125, 00:18:58.777 "qid": 0, 00:18:58.777 "state": "enabled", 00:18:58.777 "thread": "nvmf_tgt_poll_group_000", 00:18:58.777 "listen_address": { 00:18:58.777 "trtype": "TCP", 00:18:58.777 "adrfam": "IPv4", 00:18:58.777 "traddr": "10.0.0.2", 00:18:58.777 "trsvcid": "4420" 00:18:58.777 }, 00:18:58.777 "peer_address": { 
00:18:58.777 "trtype": "TCP", 00:18:58.777 "adrfam": "IPv4", 00:18:58.777 "traddr": "10.0.0.1", 00:18:58.777 "trsvcid": "39800" 00:18:58.777 }, 00:18:58.777 "auth": { 00:18:58.777 "state": "completed", 00:18:58.777 "digest": "sha512", 00:18:58.777 "dhgroup": "ffdhe4096" 00:18:58.777 } 00:18:58.777 } 00:18:58.777 ]' 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.777 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.038 19:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:18:59.609 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:59.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.609 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.898 19:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.170 00:19:00.170 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.170 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.170 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.431 { 00:19:00.431 "cntlid": 127, 00:19:00.431 "qid": 0, 00:19:00.431 "state": "enabled", 00:19:00.431 "thread": "nvmf_tgt_poll_group_000", 00:19:00.431 "listen_address": { 00:19:00.431 "trtype": "TCP", 00:19:00.431 "adrfam": "IPv4", 00:19:00.431 "traddr": "10.0.0.2", 00:19:00.431 "trsvcid": "4420" 00:19:00.431 }, 00:19:00.431 "peer_address": { 00:19:00.431 "trtype": "TCP", 00:19:00.431 "adrfam": "IPv4", 00:19:00.431 "traddr": "10.0.0.1", 00:19:00.431 "trsvcid": "39822" 00:19:00.431 }, 00:19:00.431 "auth": { 00:19:00.431 "state": "completed", 00:19:00.431 "digest": "sha512", 00:19:00.431 "dhgroup": "ffdhe4096" 00:19:00.431 } 00:19:00.431 } 00:19:00.431 ]' 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.431 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.692 19:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:19:01.264 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:01.526 19:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.526 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:02.099 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.099 { 00:19:02.099 "cntlid": 129, 00:19:02.099 "qid": 0, 00:19:02.099 "state": "enabled", 00:19:02.099 "thread": "nvmf_tgt_poll_group_000", 00:19:02.099 "listen_address": { 00:19:02.099 "trtype": "TCP", 00:19:02.099 "adrfam": "IPv4", 00:19:02.099 "traddr": "10.0.0.2", 00:19:02.099 "trsvcid": "4420" 00:19:02.099 }, 00:19:02.099 "peer_address": { 00:19:02.099 "trtype": "TCP", 00:19:02.099 "adrfam": "IPv4", 00:19:02.099 "traddr": "10.0.0.1", 00:19:02.099 "trsvcid": "39862" 00:19:02.099 }, 00:19:02.099 "auth": { 00:19:02.099 "state": "completed", 00:19:02.099 "digest": "sha512", 00:19:02.099 "dhgroup": "ffdhe6144" 00:19:02.099 } 00:19:02.099 } 00:19:02.099 ]' 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.099 
19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.099 19:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.099 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.099 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.360 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.360 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.360 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.360 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:19:03.303 19:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.303 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.564 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:19:03.826 { 00:19:03.826 "cntlid": 131, 00:19:03.826 "qid": 0, 00:19:03.826 "state": "enabled", 00:19:03.826 "thread": "nvmf_tgt_poll_group_000", 00:19:03.826 "listen_address": { 00:19:03.826 "trtype": "TCP", 00:19:03.826 "adrfam": "IPv4", 00:19:03.826 "traddr": "10.0.0.2", 00:19:03.826 "trsvcid": "4420" 00:19:03.826 }, 00:19:03.826 "peer_address": { 00:19:03.826 "trtype": "TCP", 00:19:03.826 "adrfam": "IPv4", 00:19:03.826 "traddr": "10.0.0.1", 00:19:03.826 "trsvcid": "39884" 00:19:03.826 }, 00:19:03.826 "auth": { 00:19:03.826 "state": "completed", 00:19:03.826 "digest": "sha512", 00:19:03.826 "dhgroup": "ffdhe6144" 00:19:03.826 } 00:19:03.826 } 00:19:03.826 ]' 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.826 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.087 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.087 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.087 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.087 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.087 19:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.087 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.053 19:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.626 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.626 { 00:19:05.626 "cntlid": 133, 00:19:05.626 "qid": 0, 00:19:05.626 "state": "enabled", 00:19:05.626 "thread": "nvmf_tgt_poll_group_000", 00:19:05.626 "listen_address": { 00:19:05.626 "trtype": "TCP", 00:19:05.626 "adrfam": "IPv4", 00:19:05.626 "traddr": "10.0.0.2", 00:19:05.626 "trsvcid": "4420" 00:19:05.626 }, 00:19:05.626 "peer_address": { 00:19:05.626 "trtype": "TCP", 00:19:05.626 "adrfam": "IPv4", 00:19:05.626 "traddr": "10.0.0.1", 00:19:05.626 "trsvcid": "39924" 00:19:05.626 }, 00:19:05.626 "auth": { 00:19:05.626 "state": "completed", 00:19:05.626 "digest": "sha512", 00:19:05.626 "dhgroup": "ffdhe6144" 00:19:05.626 } 00:19:05.626 } 00:19:05.626 ]' 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
00:19:05.626 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.887 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.887 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.887 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.887 19:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.829 19:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.829 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.830 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:19:06.830 19:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.403 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.403 { 00:19:07.403 "cntlid": 135, 00:19:07.403 "qid": 0, 00:19:07.403 "state": "enabled", 00:19:07.403 "thread": "nvmf_tgt_poll_group_000", 00:19:07.403 "listen_address": { 00:19:07.403 "trtype": "TCP", 00:19:07.403 "adrfam": "IPv4", 00:19:07.403 "traddr": "10.0.0.2", 00:19:07.403 "trsvcid": "4420" 00:19:07.403 }, 00:19:07.403 "peer_address": { 00:19:07.403 "trtype": "TCP", 00:19:07.403 "adrfam": "IPv4", 00:19:07.403 "traddr": "10.0.0.1", 
00:19:07.403 "trsvcid": "43088" 00:19:07.403 }, 00:19:07.403 "auth": { 00:19:07.403 "state": "completed", 00:19:07.403 "digest": "sha512", 00:19:07.403 "dhgroup": "ffdhe6144" 00:19:07.403 } 00:19:07.403 } 00:19:07.403 ]' 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.403 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.665 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.665 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.665 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.665 19:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.237 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.499 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.071 00:19:09.071 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.071 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.071 19:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.071 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.071 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.071 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.071 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.332 { 00:19:09.332 "cntlid": 137, 00:19:09.332 "qid": 0, 00:19:09.332 "state": "enabled", 00:19:09.332 "thread": "nvmf_tgt_poll_group_000", 00:19:09.332 "listen_address": { 00:19:09.332 "trtype": "TCP", 00:19:09.332 "adrfam": "IPv4", 00:19:09.332 "traddr": "10.0.0.2", 00:19:09.332 "trsvcid": "4420" 00:19:09.332 }, 00:19:09.332 "peer_address": { 00:19:09.332 "trtype": "TCP", 00:19:09.332 "adrfam": "IPv4", 00:19:09.332 "traddr": "10.0.0.1", 00:19:09.332 "trsvcid": "43108" 00:19:09.332 }, 00:19:09.332 "auth": { 00:19:09.332 "state": "completed", 00:19:09.332 "digest": "sha512", 00:19:09.332 "dhgroup": "ffdhe8192" 00:19:09.332 } 00:19:09.332 } 00:19:09.332 ]' 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.332 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.593 19:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.164 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.425 19:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:10.425 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.426 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:10.998 00:19:10.998 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.998 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.998 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.259 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.259 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.259 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.259 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.259 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.259 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.259 { 00:19:11.259 "cntlid": 139, 00:19:11.259 "qid": 0, 00:19:11.259 "state": "enabled", 00:19:11.259 "thread": "nvmf_tgt_poll_group_000", 00:19:11.259 "listen_address": { 00:19:11.259 "trtype": "TCP", 00:19:11.259 "adrfam": "IPv4", 00:19:11.259 "traddr": "10.0.0.2", 00:19:11.259 "trsvcid": "4420" 00:19:11.259 }, 00:19:11.259 "peer_address": { 00:19:11.259 "trtype": "TCP", 00:19:11.259 "adrfam": "IPv4", 00:19:11.259 "traddr": "10.0.0.1", 00:19:11.259 "trsvcid": "43128" 00:19:11.259 }, 00:19:11.259 "auth": { 00:19:11.259 "state": "completed", 00:19:11.259 "digest": "sha512", 00:19:11.259 "dhgroup": "ffdhe8192" 00:19:11.259 } 00:19:11.259 } 00:19:11.259 ]' 00:19:11.259 19:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.259 
19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.259 19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.259 19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.259 19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.259 19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.259 19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.259 19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.521 19:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YmRhMjM3NjlhYjhjOGQxMjg3NTM0OGM2ZjMwMzcyYTbaloD7: --dhchap-ctrl-secret DHHC-1:02:ZjdjZmRkNDE3NzI1NzQyNDZkNDBiYzc4NmM2ZTM4NjY1MzE1MTc2NDU3MDdjZWM0oiehCA==: 00:19:12.091 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.353 19:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.353 19:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.353 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.925 00:19:12.925 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.925 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.925 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.186 { 
00:19:13.186 "cntlid": 141, 00:19:13.186 "qid": 0, 00:19:13.186 "state": "enabled", 00:19:13.186 "thread": "nvmf_tgt_poll_group_000", 00:19:13.186 "listen_address": { 00:19:13.186 "trtype": "TCP", 00:19:13.186 "adrfam": "IPv4", 00:19:13.186 "traddr": "10.0.0.2", 00:19:13.186 "trsvcid": "4420" 00:19:13.186 }, 00:19:13.186 "peer_address": { 00:19:13.186 "trtype": "TCP", 00:19:13.186 "adrfam": "IPv4", 00:19:13.186 "traddr": "10.0.0.1", 00:19:13.186 "trsvcid": "43160" 00:19:13.186 }, 00:19:13.186 "auth": { 00:19:13.186 "state": "completed", 00:19:13.186 "digest": "sha512", 00:19:13.186 "dhgroup": "ffdhe8192" 00:19:13.186 } 00:19:13.186 } 00:19:13.186 ]' 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.186 19:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.186 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.186 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.186 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.186 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.186 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.447 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NTgxMzhmZDQ4OWRkZjBhNzRhZDY5ODNkMzUzMjA3ZmIwMTI4MjEzYmU4NTBlNGI3DWv9tw==: --dhchap-ctrl-secret DHHC-1:01:ZGRiNTM1YjAyZjFhOWQ4ZDg5ZGIwZjUwMTVkYjY4YjVdiXTr: 00:19:14.391 19:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.391 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.391 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.391 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.391 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.391 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.392 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.967 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.967 19:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.967 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.967 { 00:19:14.967 "cntlid": 143, 00:19:14.967 "qid": 0, 00:19:14.967 "state": "enabled", 00:19:14.967 "thread": "nvmf_tgt_poll_group_000", 00:19:14.967 "listen_address": { 00:19:14.967 "trtype": "TCP", 00:19:14.967 "adrfam": "IPv4", 00:19:14.967 "traddr": "10.0.0.2", 00:19:14.967 "trsvcid": "4420" 00:19:14.967 }, 00:19:14.967 "peer_address": { 00:19:14.967 "trtype": "TCP", 00:19:14.967 "adrfam": "IPv4", 00:19:14.967 "traddr": "10.0.0.1", 00:19:14.967 "trsvcid": "43200" 00:19:14.967 }, 00:19:14.967 "auth": { 00:19:14.967 "state": "completed", 00:19:14.967 "digest": "sha512", 00:19:14.967 "dhgroup": "ffdhe8192" 00:19:14.967 } 00:19:14.967 } 00:19:14.967 ]' 00:19:15.237 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.237 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.237 19:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.237 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.237 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.237 19:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.237 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.237 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.508 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:16.081 19:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.081 19:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.343 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.920 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.920 { 00:19:16.920 "cntlid": 145, 00:19:16.920 "qid": 0, 00:19:16.920 "state": "enabled", 
00:19:16.920 "thread": "nvmf_tgt_poll_group_000", 00:19:16.920 "listen_address": { 00:19:16.920 "trtype": "TCP", 00:19:16.920 "adrfam": "IPv4", 00:19:16.920 "traddr": "10.0.0.2", 00:19:16.920 "trsvcid": "4420" 00:19:16.920 }, 00:19:16.920 "peer_address": { 00:19:16.920 "trtype": "TCP", 00:19:16.920 "adrfam": "IPv4", 00:19:16.920 "traddr": "10.0.0.1", 00:19:16.920 "trsvcid": "50756" 00:19:16.920 }, 00:19:16.920 "auth": { 00:19:16.920 "state": "completed", 00:19:16.920 "digest": "sha512", 00:19:16.920 "dhgroup": "ffdhe8192" 00:19:16.920 } 00:19:16.920 } 00:19:16.920 ]' 00:19:16.920 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.181 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.181 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.181 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.181 19:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.181 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.181 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.181 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.443 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:YzkxM2ZlYjk5ODA1ODVjODUxMmVmNjMxMGRhOWU3N2M5YzYyM2Q4Nzg1NTZlMTNk1WKVtA==: --dhchap-ctrl-secret DHHC-1:03:NGY5ODY0YjgwOTA5NzRmNTg2MjQwNjI3MjAzNjQ1MTE0YzcxNzA3MjNmOWVkM2FhZWRmZGFmYTQ4MWQxNzdhMg39dl8=: 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:18.016 
19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:18.016 19:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:18.589 request: 00:19:18.589 { 00:19:18.589 "name": "nvme0", 00:19:18.589 "trtype": "tcp", 00:19:18.589 "traddr": "10.0.0.2", 00:19:18.589 "adrfam": "ipv4", 00:19:18.589 "trsvcid": "4420", 00:19:18.589 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:18.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.589 "prchk_reftag": false, 00:19:18.589 "prchk_guard": false, 00:19:18.589 "hdgst": false, 00:19:18.589 "ddgst": false, 00:19:18.589 "dhchap_key": "key2", 
00:19:18.589 "method": "bdev_nvme_attach_controller", 00:19:18.589 "req_id": 1 00:19:18.589 } 00:19:18.589 Got JSON-RPC error response 00:19:18.589 response: 00:19:18.589 { 00:19:18.589 "code": -5, 00:19:18.589 "message": "Input/output error" 00:19:18.589 } 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:18.589 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.162 request: 00:19:19.162 { 00:19:19.162 "name": "nvme0", 00:19:19.162 
"trtype": "tcp", 00:19:19.162 "traddr": "10.0.0.2", 00:19:19.162 "adrfam": "ipv4", 00:19:19.162 "trsvcid": "4420", 00:19:19.162 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:19.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:19.162 "prchk_reftag": false, 00:19:19.162 "prchk_guard": false, 00:19:19.162 "hdgst": false, 00:19:19.162 "ddgst": false, 00:19:19.162 "dhchap_key": "key1", 00:19:19.162 "dhchap_ctrlr_key": "ckey2", 00:19:19.162 "method": "bdev_nvme_attach_controller", 00:19:19.162 "req_id": 1 00:19:19.162 } 00:19:19.162 Got JSON-RPC error response 00:19:19.162 response: 00:19:19.162 { 00:19:19.162 "code": -5, 00:19:19.162 "message": "Input/output error" 00:19:19.162 } 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
00:19:19.162 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:19.163 19:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.163 19:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.736 request: 00:19:19.736 { 00:19:19.736 "name": "nvme0", 00:19:19.736 "trtype": "tcp", 00:19:19.736 "traddr": "10.0.0.2", 00:19:19.736 "adrfam": "ipv4", 00:19:19.736 "trsvcid": "4420", 00:19:19.736 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:19.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:19.736 "prchk_reftag": false, 00:19:19.736 "prchk_guard": false, 00:19:19.736 "hdgst": false, 00:19:19.736 "ddgst": false, 00:19:19.736 "dhchap_key": "key1", 00:19:19.736 "dhchap_ctrlr_key": "ckey1", 00:19:19.736 "method": "bdev_nvme_attach_controller", 00:19:19.736 "req_id": 1 00:19:19.736 } 00:19:19.736 Got JSON-RPC error response 00:19:19.736 response: 00:19:19.736 { 00:19:19.736 "code": -5, 00:19:19.736 "message": "Input/output error" 00:19:19.736 } 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3665100 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3665100 ']' 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3665100 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3665100 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3665100' 00:19:19.736 killing process with pid 3665100 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3665100 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3665100 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3692044 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3692044 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3692044 ']' 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.736 19:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3692044 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3692044 ']' 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.680 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.942 
19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.942 19:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.513 00:19:21.513 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.513 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.513 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.774 19:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.774 { 00:19:21.774 "cntlid": 1, 00:19:21.774 "qid": 0, 00:19:21.774 "state": "enabled", 00:19:21.774 "thread": "nvmf_tgt_poll_group_000", 00:19:21.774 "listen_address": { 00:19:21.774 "trtype": "TCP", 00:19:21.774 "adrfam": "IPv4", 00:19:21.774 "traddr": "10.0.0.2", 00:19:21.774 "trsvcid": "4420" 00:19:21.774 }, 00:19:21.774 "peer_address": { 00:19:21.774 "trtype": "TCP", 00:19:21.774 "adrfam": "IPv4", 00:19:21.774 "traddr": "10.0.0.1", 00:19:21.774 "trsvcid": "50822" 00:19:21.774 }, 00:19:21.774 "auth": { 00:19:21.774 "state": "completed", 00:19:21.774 "digest": "sha512", 00:19:21.774 "dhgroup": "ffdhe8192" 00:19:21.774 } 00:19:21.774 } 00:19:21.774 ]' 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.774 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.035 19:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YzQ2OThjZWYxOTUzN2MxY2FlNWU3MmM4ZWJkMmVmYjFiOGYxODNkYjgyOTljMTNmYTQ3ZGRhZjZkOTgwMTFkMLyIbCg=: 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:22.607 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:22.868 19:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.868 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.129 request: 00:19:23.129 { 00:19:23.129 "name": "nvme0", 00:19:23.129 "trtype": "tcp", 00:19:23.129 
"traddr": "10.0.0.2", 00:19:23.129 "adrfam": "ipv4", 00:19:23.129 "trsvcid": "4420", 00:19:23.129 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.129 "prchk_reftag": false, 00:19:23.129 "prchk_guard": false, 00:19:23.129 "hdgst": false, 00:19:23.129 "ddgst": false, 00:19:23.129 "dhchap_key": "key3", 00:19:23.129 "method": "bdev_nvme_attach_controller", 00:19:23.129 "req_id": 1 00:19:23.129 } 00:19:23.129 Got JSON-RPC error response 00:19:23.129 response: 00:19:23.129 { 00:19:23.129 "code": -5, 00:19:23.129 "message": "Input/output error" 00:19:23.129 } 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:23.129 19:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.129 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.390 request: 00:19:23.390 { 00:19:23.390 "name": "nvme0", 00:19:23.390 "trtype": "tcp", 00:19:23.390 "traddr": "10.0.0.2", 00:19:23.390 "adrfam": "ipv4", 00:19:23.390 "trsvcid": "4420", 00:19:23.390 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.390 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.390 "prchk_reftag": false, 00:19:23.390 "prchk_guard": false, 00:19:23.390 "hdgst": false, 00:19:23.390 "ddgst": false, 00:19:23.390 "dhchap_key": "key3", 00:19:23.390 "method": "bdev_nvme_attach_controller", 00:19:23.390 "req_id": 1 00:19:23.390 } 00:19:23.390 Got JSON-RPC error response 00:19:23.390 response: 00:19:23.390 { 00:19:23.390 "code": -5, 00:19:23.390 "message": "Input/output error" 00:19:23.390 } 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:23.390 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:23.651 request: 00:19:23.651 { 00:19:23.651 "name": "nvme0", 00:19:23.651 "trtype": "tcp", 00:19:23.651 "traddr": "10.0.0.2", 00:19:23.651 "adrfam": "ipv4", 00:19:23.651 "trsvcid": "4420", 00:19:23.651 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.651 "prchk_reftag": false, 00:19:23.651 "prchk_guard": false, 00:19:23.651 "hdgst": false, 00:19:23.651 "ddgst": false, 00:19:23.651 "dhchap_key": "key0", 00:19:23.651 "dhchap_ctrlr_key": "key1", 00:19:23.651 "method": "bdev_nvme_attach_controller", 00:19:23.651 "req_id": 1 00:19:23.651 } 00:19:23.651 Got JSON-RPC error response 00:19:23.651 response: 00:19:23.651 { 00:19:23.651 "code": -5, 00:19:23.651 "message": "Input/output error" 00:19:23.651 } 00:19:23.651 19:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:23.651 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:23.912 00:19:23.912 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:23.912 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:23.912 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.172 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.172 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.172 19:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:24.172 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:24.172 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:24.172 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3665133 00:19:24.172 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3665133 ']' 00:19:24.172 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3665133 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3665133 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3665133' 00:19:24.433 killing process with pid 3665133 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3665133 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3665133 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:24.433 19:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.433 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.433 rmmod nvme_tcp 00:19:24.695 rmmod nvme_fabrics 00:19:24.695 rmmod nvme_keyring 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3692044 ']' 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3692044 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3692044 ']' 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3692044 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3692044 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3692044' 00:19:24.695 killing process with pid 3692044 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3692044 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3692044 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.695 19:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.NGB /tmp/spdk.key-sha256.dHW /tmp/spdk.key-sha384.Nyp /tmp/spdk.key-sha512.hxR /tmp/spdk.key-sha512.7vG /tmp/spdk.key-sha384.CsD /tmp/spdk.key-sha256.sUl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:27.242 00:19:27.242 real 2m23.816s 00:19:27.242 user 5m19.708s 00:19:27.242 sys 0m21.526s 00:19:27.242 19:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.242 ************************************ 00:19:27.242 END TEST nvmf_auth_target 00:19:27.242 ************************************ 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.242 ************************************ 00:19:27.242 START TEST nvmf_bdevio_no_huge 00:19:27.242 ************************************ 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:27.242 * Looking for test storage... 
00:19:27.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.242 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:27.243 
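The PATH echoed by `paths/export.sh@6` above shows the same toolchain directories (`/opt/go/...`, `/opt/golangci/...`, `/opt/protoc/...`) prepended many times, because the export script is sourced repeatedly. A small hypothetical helper (not part of paths/export.sh) that would deduplicate such a PATH-style value while keeping first-seen order:

```shell
# Hypothetical dedup helper: split a colon-separated value into records,
# keep only the first occurrence of each, and rejoin. Sample value is
# illustrative, not taken from the log.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

sample_path="/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/bin"
dedup_path "$sample_path"
# -> /opt/go/1.21.1/bin:/usr/bin:/usr/local/bin
```

The `awk` record separator trick treats each path component as one record, so ordering is preserved without any sorting.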
19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.243 19:59:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.841 19:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.841 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:33.842 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:33.842 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:33.842 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:33.842 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.842 19:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.842 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.104 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.104 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.104 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:34.104 19:59:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.104 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.104 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:34.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:19:34.366 00:19:34.366 --- 10.0.0.2 ping statistics --- 00:19:34.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.366 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:19:34.366 00:19:34.366 --- 10.0.0.1 ping statistics --- 00:19:34.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.366 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3697098 00:19:34.366 19:59:22 
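Once the namespace passes the ping checks, `nvmf/common.sh@270` rewrites `NVMF_APP` so the target launches inside the test namespace. That is the standard bash array-prepend idiom; a sketch with illustrative values (array contents here are examples, not captured from this run):

```shell
# Array-prepend idiom from nvmf/common.sh@270: put the `ip netns exec`
# prefix ahead of the existing app command so the target runs in the
# namespace. Values below are illustrative.
NVMF_TARGET_NAMESPACE="cvl_0_0_ns_spdk"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF)

# Prepend: new array = prefix elements followed by original elements.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"
# -> ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
```

Quoting each expansion as `"${arr[@]}"` keeps elements with spaces intact, which is why the scripts build commands as arrays rather than strings.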
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3697098 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3697098 ']' 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.366 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.367 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.367 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.367 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:34.367 [2024-07-24 19:59:22.185263] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:19:34.367 [2024-07-24 19:59:22.185331] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:34.367 [2024-07-24 19:59:22.279306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.668 [2024-07-24 19:59:22.387932] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:34.668 [2024-07-24 19:59:22.387984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.668 [2024-07-24 19:59:22.387992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.668 [2024-07-24 19:59:22.387999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.668 [2024-07-24 19:59:22.388005] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.668 [2024-07-24 19:59:22.388163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:34.668 [2024-07-24 19:59:22.388326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:34.668 [2024-07-24 19:59:22.388750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:34.668 [2024-07-24 19:59:22.388755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.243 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.243 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:35.243 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.243 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.243 19:59:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:35.243 [2024-07-24 19:59:23.037396] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:35.243 Malloc0 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:35.243 [2024-07-24 19:59:23.091232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.243 { 00:19:35.243 "params": { 00:19:35.243 "name": "Nvme$subsystem", 00:19:35.243 "trtype": "$TEST_TRANSPORT", 00:19:35.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.243 "adrfam": "ipv4", 00:19:35.243 "trsvcid": "$NVMF_PORT", 00:19:35.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.243 "hdgst": ${hdgst:-false}, 00:19:35.243 "ddgst": ${ddgst:-false} 00:19:35.243 }, 00:19:35.243 "method": "bdev_nvme_attach_controller" 00:19:35.243 } 00:19:35.243 EOF 00:19:35.243 )") 00:19:35.243 19:59:23 
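The `gen_nvmf_target_json` trace above builds the bdevio JSON config from a per-subsystem heredoc fragment. A trimmed sketch of that pattern (fields reduced to a subset of what the log shows; the validation step with `python3` is an assumption, not part of the script):

```shell
# Sketch of the gen_nvmf_target_json heredoc pattern: expand per-subsystem
# variables into a JSON fragment for bdev_nvme_attach_controller.
# Trimmed to a few fields; addresses/NQNs mirror the log's values.
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)

# Sanity-check that the assembled fragment is well-formed JSON
# (assumes python3 is available on the test host).
printf '%s\n' "$config" | python3 -m json.tool > /dev/null && echo "valid"
```

In the real helper, multiple such fragments are joined with `IFS=,` and piped through `jq`, which is why the log shows the fully resolved object printed afterwards.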
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:35.243 19:59:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:35.243 "params": { 00:19:35.243 "name": "Nvme1", 00:19:35.243 "trtype": "tcp", 00:19:35.243 "traddr": "10.0.0.2", 00:19:35.243 "adrfam": "ipv4", 00:19:35.243 "trsvcid": "4420", 00:19:35.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.243 "hdgst": false, 00:19:35.243 "ddgst": false 00:19:35.243 }, 00:19:35.243 "method": "bdev_nvme_attach_controller" 00:19:35.243 }' 00:19:35.243 [2024-07-24 19:59:23.146606] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:19:35.243 [2024-07-24 19:59:23.146679] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3697432 ] 00:19:35.505 [2024-07-24 19:59:23.216091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:35.505 [2024-07-24 19:59:23.312464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.505 [2024-07-24 19:59:23.312580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.505 [2024-07-24 19:59:23.312583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.765 I/O targets: 00:19:35.765 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:35.765 00:19:35.765 00:19:35.765 CUnit - A unit testing framework for C - Version 2.1-3 00:19:35.765 http://cunit.sourceforge.net/ 00:19:35.765 00:19:35.765 00:19:35.765 Suite: bdevio tests on: Nvme1n1 00:19:35.765 Test: blockdev write read block 
...passed 00:19:35.765 Test: blockdev write zeroes read block ...passed 00:19:35.765 Test: blockdev write zeroes read no split ...passed 00:19:36.026 Test: blockdev write zeroes read split ...passed 00:19:36.026 Test: blockdev write zeroes read split partial ...passed 00:19:36.026 Test: blockdev reset ...[2024-07-24 19:59:23.850663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.026 [2024-07-24 19:59:23.850720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21aec10 (9): Bad file descriptor 00:19:36.026 [2024-07-24 19:59:23.910744] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:36.026 passed 00:19:36.026 Test: blockdev write read 8 blocks ...passed 00:19:36.026 Test: blockdev write read size > 128k ...passed 00:19:36.026 Test: blockdev write read invalid size ...passed 00:19:36.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:36.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:36.027 Test: blockdev write read max offset ...passed 00:19:36.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:36.288 Test: blockdev writev readv 8 blocks ...passed 00:19:36.288 Test: blockdev writev readv 30 x 1block ...passed 00:19:36.288 Test: blockdev writev readv block ...passed 00:19:36.288 Test: blockdev writev readv size > 128k ...passed 00:19:36.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:36.288 Test: blockdev comparev and writev ...[2024-07-24 19:59:24.132966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.132989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.133001] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.133006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.133415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.133425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.133434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.133440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.133689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.133698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.133708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.133713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.133967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.133975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.133985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:36.288 [2024-07-24 19:59:24.133990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.288 passed 00:19:36.288 Test: blockdev nvme passthru rw ...passed 00:19:36.288 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:59:24.218670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:36.288 [2024-07-24 19:59:24.218686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.218919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:36.288 [2024-07-24 19:59:24.218927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.219143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:36.288 [2024-07-24 19:59:24.219151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.288 [2024-07-24 19:59:24.219361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:36.288 [2024-07-24 19:59:24.219370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.288 passed 00:19:36.288 Test: blockdev nvme admin passthru ...passed 00:19:36.549 Test: blockdev copy ...passed 00:19:36.549 00:19:36.549 Run Summary: Type Total Ran Passed Failed Inactive 
00:19:36.549 suites 1 1 n/a 0 0 00:19:36.549 tests 23 23 23 0 0 00:19:36.549 asserts 152 152 152 0 n/a 00:19:36.549 00:19:36.549 Elapsed time = 1.327 seconds 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.811 rmmod nvme_tcp 00:19:36.811 rmmod nvme_fabrics 00:19:36.811 rmmod nvme_keyring 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:36.811 
19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3697098 ']' 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3697098 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3697098 ']' 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3697098 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3697098 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3697098' 00:19:36.811 killing process with pid 3697098 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3697098 00:19:36.811 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3697098 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.072 19:59:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.619 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:39.619 00:19:39.619 real 0m12.194s 00:19:39.619 user 0m14.624s 00:19:39.619 sys 0m6.269s 00:19:39.619 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.619 19:59:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.619 ************************************ 00:19:39.619 END TEST nvmf_bdevio_no_huge 00:19:39.619 ************************************ 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.620 ************************************ 00:19:39.620 START TEST nvmf_tls 00:19:39.620 ************************************ 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:39.620 * Looking for test storage... 
00:19:39.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.620 
19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.620 19:59:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.218 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.218 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.218 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.219 19:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.219 19:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:46.219 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:46.219 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.219 19:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:46.219 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:46.219 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.219 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:46.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:19:46.481 00:19:46.481 --- 10.0.0.2 ping statistics --- 00:19:46.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.481 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:19:46.481 00:19:46.481 --- 10.0.0.1 ping statistics --- 00:19:46.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.481 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3701781 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3701781 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3701781 ']' 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.481 19:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.742 [2024-07-24 19:59:34.458850] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:19:46.742 [2024-07-24 19:59:34.458913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.742 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.742 [2024-07-24 19:59:34.547323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.742 [2024-07-24 19:59:34.638887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.743 [2024-07-24 19:59:34.638946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.743 [2024-07-24 19:59:34.638954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.743 [2024-07-24 19:59:34.638961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.743 [2024-07-24 19:59:34.638967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.743 [2024-07-24 19:59:34.638991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.314 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.314 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.314 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.314 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.314 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.575 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.575 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:47.575 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:47.575 true 00:19:47.575 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:47.575 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:47.837 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:47.837 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:47.837 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:48.098 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.098 19:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:48.098 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:48.098 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:48.098 19:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:48.359 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.359 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:48.620 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:48.620 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:48.620 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.620 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:48.620 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:48.620 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:48.620 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:48.882 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:48.882 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:48.882 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:48.882 
19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:48.882 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:49.143 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:49.143 19:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.v14o9CVlcL 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.22PpUHi965 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.v14o9CVlcL 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.22PpUHi965 00:19:49.405 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:49.666 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:49.927 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.v14o9CVlcL 00:19:49.927 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.v14o9CVlcL 00:19:49.927 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.927 [2024-07-24 19:59:37.812821] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.927 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:50.188 19:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:50.188 [2024-07-24 19:59:38.125583] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.188 [2024-07-24 19:59:38.125764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.188 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:50.448 malloc0 00:19:50.448 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:50.709 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v14o9CVlcL 00:19:50.709 
[2024-07-24 19:59:38.596778] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:50.710 19:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.v14o9CVlcL 00:19:50.710 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.773 Initializing NVMe Controllers 00:20:00.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:00.773 Initialization complete. Launching workers. 00:20:00.773 ======================================================== 00:20:00.773 Latency(us) 00:20:00.773 Device Information : IOPS MiB/s Average min max 00:20:00.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18803.20 73.45 3403.73 1093.05 4891.45 00:20:00.773 ======================================================== 00:20:00.773 Total : 18803.20 73.45 3403.73 1093.05 4891.45 00:20:00.773 00:20:00.773 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.v14o9CVlcL 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v14o9CVlcL' 00:20:01.034 19:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3704523 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3704523 /var/tmp/bdevperf.sock 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3704523 ']' 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.034 19:59:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.034 [2024-07-24 19:59:48.777232] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:20:01.034 [2024-07-24 19:59:48.777289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704523 ] 00:20:01.034 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.034 [2024-07-24 19:59:48.826655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.034 [2024-07-24 19:59:48.879264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.606 19:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.606 19:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.606 19:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v14o9CVlcL 00:20:01.867 [2024-07-24 19:59:49.664370] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.867 [2024-07-24 19:59:49.664424] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:01.867 TLSTESTn1 00:20:01.867 19:59:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.128 Running I/O for 10 seconds... 
00:20:12.142 00:20:12.142 Latency(us) 00:20:12.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.142 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.142 Verification LBA range: start 0x0 length 0x2000 00:20:12.142 TLSTESTn1 : 10.06 2446.38 9.56 0.00 0.00 52170.41 5707.09 117964.80 00:20:12.142 =================================================================================================================== 00:20:12.142 Total : 2446.38 9.56 0.00 0.00 52170.41 5707.09 117964.80 00:20:12.142 0 00:20:12.142 19:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.142 19:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3704523 00:20:12.142 19:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3704523 ']' 00:20:12.142 19:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3704523 00:20:12.142 19:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.142 19:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.142 19:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3704523 00:20:12.142 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:12.142 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:12.142 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3704523' 00:20:12.142 killing process with pid 3704523 00:20:12.142 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3704523 00:20:12.142 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.142 
00:20:12.142 Latency(us) 00:20:12.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.142 =================================================================================================================== 00:20:12.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.142 [2024-07-24 20:00:00.017482] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:12.142 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3704523 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.22PpUHi965 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.22PpUHi965 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.22PpUHi965 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.403 20:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.22PpUHi965' 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3706872 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3706872 /var/tmp/bdevperf.sock 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3706872 ']' 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.403 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.403 [2024-07-24 20:00:00.194484] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:20:12.403 [2024-07-24 20:00:00.194560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706872 ] 00:20:12.403 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.403 [2024-07-24 20:00:00.247293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.403 [2024-07-24 20:00:00.299404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.22PpUHi965 00:20:12.665 [2024-07-24 20:00:00.518966] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.665 [2024-07-24 20:00:00.519019] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:12.665 [2024-07-24 20:00:00.525438] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.665 [2024-07-24 20:00:00.525889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253bec0 (107): Transport endpoint is not connected 00:20:12.665 [2024-07-24 20:00:00.526884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253bec0 
(9): Bad file descriptor 00:20:12.665 [2024-07-24 20:00:00.527886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:12.665 [2024-07-24 20:00:00.527894] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:12.665 [2024-07-24 20:00:00.527901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:12.665 request: 00:20:12.665 { 00:20:12.665 "name": "TLSTEST", 00:20:12.665 "trtype": "tcp", 00:20:12.665 "traddr": "10.0.0.2", 00:20:12.665 "adrfam": "ipv4", 00:20:12.665 "trsvcid": "4420", 00:20:12.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.665 "prchk_reftag": false, 00:20:12.665 "prchk_guard": false, 00:20:12.665 "hdgst": false, 00:20:12.665 "ddgst": false, 00:20:12.665 "psk": "/tmp/tmp.22PpUHi965", 00:20:12.665 "method": "bdev_nvme_attach_controller", 00:20:12.665 "req_id": 1 00:20:12.665 } 00:20:12.665 Got JSON-RPC error response 00:20:12.665 response: 00:20:12.665 { 00:20:12.665 "code": -5, 00:20:12.665 "message": "Input/output error" 00:20:12.665 } 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3706872 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3706872 ']' 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3706872 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3706872 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:12.665 20:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3706872' 00:20:12.665 killing process with pid 3706872 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3706872 00:20:12.665 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.665 00:20:12.665 Latency(us) 00:20:12.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.665 =================================================================================================================== 00:20:12.665 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.665 [2024-07-24 20:00:00.598332] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:12.665 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3706872 00:20:12.925 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:12.925 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:12.925 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.925 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.925 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.v14o9CVlcL 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.v14o9CVlcL 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.v14o9CVlcL 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v14o9CVlcL' 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3706910 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3706910 /var/tmp/bdevperf.sock 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 3706910 ']' 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.926 20:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.926 [2024-07-24 20:00:00.764728] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:12.926 [2024-07-24 20:00:00.764804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706910 ] 00:20:12.926 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.926 [2024-07-24 20:00:00.814163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.926 [2024-07-24 20:00:00.866856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.v14o9CVlcL 00:20:13.867 [2024-07-24 20:00:01.659866] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.867 [2024-07-24 20:00:01.659921] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:13.867 [2024-07-24 20:00:01.668139] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:13.867 [2024-07-24 20:00:01.668159] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:13.867 [2024-07-24 20:00:01.668178] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:13.867 [2024-07-24 20:00:01.669032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfec0 (107): Transport endpoint is not connected 00:20:13.867 [2024-07-24 20:00:01.670028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cfec0 (9): Bad file descriptor 00:20:13.867 [2024-07-24 20:00:01.671030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:13.867 [2024-07-24 20:00:01.671036] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:13.867 [2024-07-24 20:00:01.671047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
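The attach attempt above is issued through SPDK's JSON-RPC interface on /var/tmp/bdevperf.sock. As a minimal illustration of how that call is framed, the sketch below builds the same request body by hand (parameter values are copied from the rpc.py invocation above; the socket transport and rpc.py internals are deliberately omitted, so this does not talk to a live target):

```python
import json

# Frame a JSON-RPC request equivalent to the rpc.py call above.
# All parameter values mirror the bdev_nvme_attach_controller
# invocation in the log; nothing here contacts a running SPDK app.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host2",
        "psk": "/tmp/tmp.v14o9CVlcL",
    },
}

# Serialize and round-trip, as it would travel over the UNIX socket.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])
```

In the run above the target has no PSK registered for this host/subsystem pair, so the TLS handshake is torn down and the RPC fails with code -5 (Input/output error), which is exactly what the NOT-wrapped negative test expects.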
00:20:13.867 request: 00:20:13.867 { 00:20:13.867 "name": "TLSTEST", 00:20:13.867 "trtype": "tcp", 00:20:13.867 "traddr": "10.0.0.2", 00:20:13.867 "adrfam": "ipv4", 00:20:13.867 "trsvcid": "4420", 00:20:13.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.867 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:13.867 "prchk_reftag": false, 00:20:13.867 "prchk_guard": false, 00:20:13.867 "hdgst": false, 00:20:13.867 "ddgst": false, 00:20:13.867 "psk": "/tmp/tmp.v14o9CVlcL", 00:20:13.867 "method": "bdev_nvme_attach_controller", 00:20:13.867 "req_id": 1 00:20:13.867 } 00:20:13.867 Got JSON-RPC error response 00:20:13.867 response: 00:20:13.867 { 00:20:13.867 "code": -5, 00:20:13.867 "message": "Input/output error" 00:20:13.867 } 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3706910 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3706910 ']' 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3706910 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3706910 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3706910' 00:20:13.867 killing process with pid 3706910 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3706910 00:20:13.867 Received shutdown signal, test time was 
about 10.000000 seconds 00:20:13.867 00:20:13.867 Latency(us) 00:20:13.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.867 =================================================================================================================== 00:20:13.867 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.867 [2024-07-24 20:00:01.740844] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:13.867 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3706910 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.v14o9CVlcL 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.v14o9CVlcL 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.v14o9CVlcL 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.v14o9CVlcL' 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3707316 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3707316 /var/tmp/bdevperf.sock 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3707316 ']' 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.126 20:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.126 [2024-07-24 20:00:01.910318] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:14.126 [2024-07-24 20:00:01.910386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3707316 ] 00:20:14.126 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.126 [2024-07-24 20:00:01.960049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.126 [2024-07-24 20:00:02.012155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.v14o9CVlcL 00:20:15.064 [2024-07-24 20:00:02.785059] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.064 [2024-07-24 20:00:02.785114] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:15.064 [2024-07-24 20:00:02.790852] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:15.064 [2024-07-24 20:00:02.790868] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:15.064 [2024-07-24 20:00:02.790887] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:15.064 [2024-07-24 20:00:02.792106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb73ec0 (107): Transport endpoint is not connected 00:20:15.064 [2024-07-24 20:00:02.793101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb73ec0 (9): Bad file descriptor 00:20:15.064 [2024-07-24 20:00:02.794103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:15.064 [2024-07-24 20:00:02.794113] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:15.064 [2024-07-24 20:00:02.794119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:15.064 request: 00:20:15.064 { 00:20:15.064 "name": "TLSTEST", 00:20:15.064 "trtype": "tcp", 00:20:15.064 "traddr": "10.0.0.2", 00:20:15.064 "adrfam": "ipv4", 00:20:15.064 "trsvcid": "4420", 00:20:15.064 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:15.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.064 "prchk_reftag": false, 00:20:15.064 "prchk_guard": false, 00:20:15.064 "hdgst": false, 00:20:15.064 "ddgst": false, 00:20:15.064 "psk": "/tmp/tmp.v14o9CVlcL", 00:20:15.064 "method": "bdev_nvme_attach_controller", 00:20:15.064 "req_id": 1 00:20:15.064 } 00:20:15.064 Got JSON-RPC error response 00:20:15.064 response: 00:20:15.064 { 00:20:15.064 "code": -5, 00:20:15.064 "message": "Input/output error" 00:20:15.064 } 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3707316 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3707316 ']' 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3707316 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3707316 00:20:15.064 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3707316' 00:20:15.065 killing process with pid 3707316 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3707316 00:20:15.065 Received shutdown signal, test time was 
about 10.000000 seconds 00:20:15.065 00:20:15.065 Latency(us) 00:20:15.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.065 =================================================================================================================== 00:20:15.065 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:15.065 [2024-07-24 20:00:02.866510] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3707316 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:15.065 20:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3707507 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3707507 /var/tmp/bdevperf.sock 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3707507 ']' 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:15.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.065 20:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.326 [2024-07-24 20:00:03.023076] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:15.326 [2024-07-24 20:00:03.023135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3707507 ] 00:20:15.326 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.326 [2024-07-24 20:00:03.072237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.326 [2024-07-24 20:00:03.124215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.902 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.902 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:15.902 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:16.164 [2024-07-24 20:00:03.933664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:16.164 [2024-07-24 20:00:03.935964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13294a0 (9): Bad file descriptor 00:20:16.164 [2024-07-24 20:00:03.936964] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.164 [2024-07-24 20:00:03.936971] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:16.164 [2024-07-24 20:00:03.936977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:16.164 request: 00:20:16.164 { 00:20:16.164 "name": "TLSTEST", 00:20:16.164 "trtype": "tcp", 00:20:16.164 "traddr": "10.0.0.2", 00:20:16.164 "adrfam": "ipv4", 00:20:16.164 "trsvcid": "4420", 00:20:16.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.164 "prchk_reftag": false, 00:20:16.164 "prchk_guard": false, 00:20:16.164 "hdgst": false, 00:20:16.164 "ddgst": false, 00:20:16.164 "method": "bdev_nvme_attach_controller", 00:20:16.164 "req_id": 1 00:20:16.164 } 00:20:16.164 Got JSON-RPC error response 00:20:16.164 response: 00:20:16.164 { 00:20:16.164 "code": -5, 00:20:16.164 "message": "Input/output error" 00:20:16.164 } 00:20:16.164 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3707507 00:20:16.164 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3707507 ']' 00:20:16.164 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3707507 00:20:16.164 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:16.164 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.164 20:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3707507 00:20:16.164 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:16.164 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:16.164 20:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3707507' 00:20:16.164 killing process with pid 3707507 00:20:16.164 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3707507 00:20:16.164 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.164 00:20:16.164 Latency(us) 00:20:16.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.164 =================================================================================================================== 00:20:16.164 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:16.164 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3707507 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3701781 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3701781 ']' 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3701781 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3701781 00:20:16.425 
20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:16.425 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3701781' 00:20:16.426 killing process with pid 3701781 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3701781 00:20:16.426 [2024-07-24 20:00:04.180698] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3701781 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.LBIMIM1KRA 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.LBIMIM1KRA 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3707712 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3707712 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3707712 ']' 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
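The `format_interchange_psk` step above emits the configured key in the NVMe TLS PSK interchange format (`NVMeTLSkey-1:02:<base64>:`). Decoding the base64 payload shown in the log reveals the ASCII hex key material followed by a 4-byte CRC32 trailer; the sketch below reproduces that construction (the little-endian CRC byte order is an assumption on my part, so the exact trailer bytes may differ from the key printed above):

```python
import base64
import zlib

def format_interchange_psk(key_material: str, hash_id: int) -> str:
    """Wrap key material in the NVMe TLS PSK interchange format.

    Mirrors what the log's format_key helper appears to do:
    base64-encode the ASCII key string plus a CRC32 trailer.
    The CRC byte order is an assumption, not confirmed by the log.
    """
    payload = key_material.encode("ascii")
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    trailer = crc.to_bytes(4, "little")  # assumed byte order
    b64 = base64.b64encode(payload + trailer).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

# Same key material and digest id (2 => SHA-384) as the log above.
psk = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2
)
print(psk)
```

The resulting string is what the test writes to the mktemp'd key file (`chmod 0600`) and later passes to `nvmf_subsystem_add_host --psk`, so both sides of the TLS handshake derive the session PSK from the same material.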
00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.426 20:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.686 [2024-07-24 20:00:04.423830] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:16.686 [2024-07-24 20:00:04.423896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.686 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.686 [2024-07-24 20:00:04.509753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.686 [2024-07-24 20:00:04.568127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.686 [2024-07-24 20:00:04.568161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.686 [2024-07-24 20:00:04.568167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.686 [2024-07-24 20:00:04.568172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.686 [2024-07-24 20:00:04.568176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.686 [2024-07-24 20:00:04.568190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.257 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.258 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:17.258 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.258 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.258 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.519 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.519 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.LBIMIM1KRA 00:20:17.519 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LBIMIM1KRA 00:20:17.519 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.519 [2024-07-24 20:00:05.359592] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.519 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.780 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.780 [2024-07-24 20:00:05.660334] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.780 [2024-07-24 20:00:05.660503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:17.780 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.041 malloc0 00:20:18.041 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.041 20:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBIMIM1KRA 00:20:18.302 [2024-07-24 20:00:06.115204] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LBIMIM1KRA 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LBIMIM1KRA' 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3708134 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3708134 /var/tmp/bdevperf.sock 00:20:18.302 
20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3708134 ']' 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.302 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.302 [2024-07-24 20:00:06.178954] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:20:18.302 [2024-07-24 20:00:06.179010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708134 ] 00:20:18.302 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.302 [2024-07-24 20:00:06.229054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.562 [2024-07-24 20:00:06.281333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.133 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.133 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:19.133 20:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBIMIM1KRA 00:20:19.133 [2024-07-24 20:00:07.086364] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.133 [2024-07-24 20:00:07.086424] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:19.393 TLSTESTn1 00:20:19.393 20:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:19.393 Running I/O for 10 seconds... 
00:20:29.432 00:20:29.432 Latency(us) 00:20:29.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.432 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:29.432 Verification LBA range: start 0x0 length 0x2000 00:20:29.432 TLSTESTn1 : 10.05 2279.22 8.90 0.00 0.00 56012.72 4805.97 171267.41 00:20:29.432 =================================================================================================================== 00:20:29.432 Total : 2279.22 8.90 0.00 0.00 56012.72 4805.97 171267.41 00:20:29.432 0 00:20:29.432 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.432 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3708134 00:20:29.432 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3708134 ']' 00:20:29.432 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3708134 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3708134 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3708134' 00:20:29.694 killing process with pid 3708134 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3708134 00:20:29.694 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.694 
00:20:29.694 Latency(us) 00:20:29.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.694 =================================================================================================================== 00:20:29.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.694 [2024-07-24 20:00:17.438639] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3708134 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.LBIMIM1KRA 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LBIMIM1KRA 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LBIMIM1KRA 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LBIMIM1KRA 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:29.694 20:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LBIMIM1KRA' 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3710851 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3710851 /var/tmp/bdevperf.sock 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3710851 ']' 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.694 20:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.694 [2024-07-24 20:00:17.608935] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:29.694 [2024-07-24 20:00:17.608990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710851 ] 00:20:29.694 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.955 [2024-07-24 20:00:17.658892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.955 [2024-07-24 20:00:17.709475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.525 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.525 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:30.525 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBIMIM1KRA 00:20:30.787 [2024-07-24 20:00:18.518606] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.787 [2024-07-24 20:00:18.518650] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:30.787 [2024-07-24 20:00:18.518655] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.LBIMIM1KRA 00:20:30.787 request: 00:20:30.787 { 00:20:30.787 "name": "TLSTEST", 00:20:30.787 "trtype": "tcp", 00:20:30.787 "traddr": "10.0.0.2", 00:20:30.787 
"adrfam": "ipv4", 00:20:30.787 "trsvcid": "4420", 00:20:30.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.787 "prchk_reftag": false, 00:20:30.787 "prchk_guard": false, 00:20:30.787 "hdgst": false, 00:20:30.787 "ddgst": false, 00:20:30.787 "psk": "/tmp/tmp.LBIMIM1KRA", 00:20:30.787 "method": "bdev_nvme_attach_controller", 00:20:30.787 "req_id": 1 00:20:30.787 } 00:20:30.787 Got JSON-RPC error response 00:20:30.787 response: 00:20:30.787 { 00:20:30.787 "code": -1, 00:20:30.787 "message": "Operation not permitted" 00:20:30.787 } 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3710851 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3710851 ']' 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3710851 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3710851 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3710851' 00:20:30.787 killing process with pid 3710851 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3710851 00:20:30.787 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.787 00:20:30.787 Latency(us) 00:20:30.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:30.787 =================================================================================================================== 00:20:30.787 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3710851 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3707712 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3707712 ']' 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3707712 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.787 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3707712 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3707712' 00:20:31.049 killing process with pid 3707712 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 3707712 00:20:31.049 [2024-07-24 20:00:18.767208] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3707712 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3711198 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3711198 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3711198 ']' 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:31.049 20:00:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.049 [2024-07-24 20:00:18.947279] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:31.049 [2024-07-24 20:00:18.947331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.049 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.309 [2024-07-24 20:00:19.027802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.309 [2024-07-24 20:00:19.079658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.309 [2024-07-24 20:00:19.079694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.309 [2024-07-24 20:00:19.079700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.309 [2024-07-24 20:00:19.079705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.309 [2024-07-24 20:00:19.079709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:31.309 [2024-07-24 20:00:19.079732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.LBIMIM1KRA 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LBIMIM1KRA 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.LBIMIM1KRA 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LBIMIM1KRA 00:20:31.881 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.142 [2024-07-24 20:00:19.885295] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.142 20:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:32.142 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:32.402 [2024-07-24 20:00:20.198071] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.402 [2024-07-24 20:00:20.198265] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.402 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:32.662 malloc0 00:20:32.662 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.662 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBIMIM1KRA 00:20:32.923 [2024-07-24 20:00:20.681206] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:32.923 [2024-07-24 20:00:20.681228] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:32.923 [2024-07-24 20:00:20.681247] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:32.923 request: 00:20:32.923 { 
00:20:32.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.923 "host": "nqn.2016-06.io.spdk:host1", 00:20:32.923 "psk": "/tmp/tmp.LBIMIM1KRA", 00:20:32.923 "method": "nvmf_subsystem_add_host", 00:20:32.923 "req_id": 1 00:20:32.923 } 00:20:32.923 Got JSON-RPC error response 00:20:32.923 response: 00:20:32.923 { 00:20:32.923 "code": -32603, 00:20:32.923 "message": "Internal error" 00:20:32.923 } 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3711198 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3711198 ']' 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3711198 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711198 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711198' 00:20:32.923 killing process with pid 3711198 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 3711198 00:20:32.923 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3711198 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.LBIMIM1KRA 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3711570 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3711570 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3711570 ']' 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.185 20:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.185 [2024-07-24 20:00:20.961238] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:33.185 [2024-07-24 20:00:20.961306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.185 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.185 [2024-07-24 20:00:21.043691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.185 [2024-07-24 20:00:21.097366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.185 [2024-07-24 20:00:21.097397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.185 [2024-07-24 20:00:21.097402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.185 [2024-07-24 20:00:21.097407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.185 [2024-07-24 20:00:21.097410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:33.185 [2024-07-24 20:00:21.097424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.LBIMIM1KRA 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LBIMIM1KRA 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:34.128 [2024-07-24 20:00:21.894856] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.128 20:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:34.128 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:34.388 [2024-07-24 20:00:22.203629] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.388 [2024-07-24 20:00:22.203807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:34.388 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:34.649 malloc0 00:20:34.649 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:34.649 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBIMIM1KRA 00:20:34.909 [2024-07-24 20:00:22.674757] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3711926 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3711926 /var/tmp/bdevperf.sock 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3711926 ']' 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:34.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.909 20:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.909 [2024-07-24 20:00:22.736813] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:34.909 [2024-07-24 20:00:22.736864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711926 ] 00:20:34.909 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.909 [2024-07-24 20:00:22.785472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.909 [2024-07-24 20:00:22.838107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.850 20:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.850 20:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:35.850 20:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBIMIM1KRA 00:20:35.850 [2024-07-24 20:00:23.643159] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.850 [2024-07-24 20:00:23.643219] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.850 TLSTESTn1 00:20:35.850 20:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:36.112 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:36.112 "subsystems": [ 00:20:36.112 { 00:20:36.112 "subsystem": "keyring", 00:20:36.112 "config": [] 00:20:36.112 }, 00:20:36.112 { 00:20:36.112 "subsystem": "iobuf", 00:20:36.112 "config": [ 00:20:36.112 { 00:20:36.112 "method": "iobuf_set_options", 00:20:36.113 "params": { 00:20:36.113 "small_pool_count": 8192, 00:20:36.113 "large_pool_count": 1024, 00:20:36.113 "small_bufsize": 8192, 00:20:36.113 "large_bufsize": 135168 00:20:36.113 } 00:20:36.113 } 00:20:36.113 ] 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "subsystem": "sock", 00:20:36.113 "config": [ 00:20:36.113 { 00:20:36.113 "method": "sock_set_default_impl", 00:20:36.113 "params": { 00:20:36.113 "impl_name": "posix" 00:20:36.113 } 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "method": "sock_impl_set_options", 00:20:36.113 "params": { 00:20:36.113 "impl_name": "ssl", 00:20:36.113 "recv_buf_size": 4096, 00:20:36.113 "send_buf_size": 4096, 00:20:36.113 "enable_recv_pipe": true, 00:20:36.113 "enable_quickack": false, 00:20:36.113 "enable_placement_id": 0, 00:20:36.113 "enable_zerocopy_send_server": true, 00:20:36.113 "enable_zerocopy_send_client": false, 00:20:36.113 "zerocopy_threshold": 0, 00:20:36.113 "tls_version": 0, 00:20:36.113 "enable_ktls": false 00:20:36.113 } 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "method": "sock_impl_set_options", 00:20:36.113 "params": { 00:20:36.113 "impl_name": "posix", 00:20:36.113 "recv_buf_size": 2097152, 00:20:36.113 "send_buf_size": 2097152, 00:20:36.113 "enable_recv_pipe": true, 00:20:36.113 "enable_quickack": false, 00:20:36.113 "enable_placement_id": 0, 00:20:36.113 "enable_zerocopy_send_server": true, 00:20:36.113 "enable_zerocopy_send_client": false, 00:20:36.113 "zerocopy_threshold": 0, 00:20:36.113 "tls_version": 0, 00:20:36.113 "enable_ktls": false 00:20:36.113 } 
00:20:36.113 } 00:20:36.113 ] 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "subsystem": "vmd", 00:20:36.113 "config": [] 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "subsystem": "accel", 00:20:36.113 "config": [ 00:20:36.113 { 00:20:36.113 "method": "accel_set_options", 00:20:36.113 "params": { 00:20:36.113 "small_cache_size": 128, 00:20:36.113 "large_cache_size": 16, 00:20:36.113 "task_count": 2048, 00:20:36.113 "sequence_count": 2048, 00:20:36.113 "buf_count": 2048 00:20:36.113 } 00:20:36.113 } 00:20:36.113 ] 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "subsystem": "bdev", 00:20:36.113 "config": [ 00:20:36.113 { 00:20:36.113 "method": "bdev_set_options", 00:20:36.113 "params": { 00:20:36.113 "bdev_io_pool_size": 65535, 00:20:36.113 "bdev_io_cache_size": 256, 00:20:36.113 "bdev_auto_examine": true, 00:20:36.113 "iobuf_small_cache_size": 128, 00:20:36.113 "iobuf_large_cache_size": 16 00:20:36.113 } 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "method": "bdev_raid_set_options", 00:20:36.113 "params": { 00:20:36.113 "process_window_size_kb": 1024, 00:20:36.113 "process_max_bandwidth_mb_sec": 0 00:20:36.113 } 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "method": "bdev_iscsi_set_options", 00:20:36.113 "params": { 00:20:36.113 "timeout_sec": 30 00:20:36.113 } 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "method": "bdev_nvme_set_options", 00:20:36.113 "params": { 00:20:36.113 "action_on_timeout": "none", 00:20:36.113 "timeout_us": 0, 00:20:36.113 "timeout_admin_us": 0, 00:20:36.113 "keep_alive_timeout_ms": 10000, 00:20:36.113 "arbitration_burst": 0, 00:20:36.113 "low_priority_weight": 0, 00:20:36.113 "medium_priority_weight": 0, 00:20:36.113 "high_priority_weight": 0, 00:20:36.113 "nvme_adminq_poll_period_us": 10000, 00:20:36.113 "nvme_ioq_poll_period_us": 0, 00:20:36.113 "io_queue_requests": 0, 00:20:36.113 "delay_cmd_submit": true, 00:20:36.113 "transport_retry_count": 4, 00:20:36.113 "bdev_retry_count": 3, 00:20:36.113 "transport_ack_timeout": 0, 00:20:36.113 
"ctrlr_loss_timeout_sec": 0, 00:20:36.113 "reconnect_delay_sec": 0, 00:20:36.113 "fast_io_fail_timeout_sec": 0, 00:20:36.113 "disable_auto_failback": false, 00:20:36.113 "generate_uuids": false, 00:20:36.113 "transport_tos": 0, 00:20:36.113 "nvme_error_stat": false, 00:20:36.113 "rdma_srq_size": 0, 00:20:36.113 "io_path_stat": false, 00:20:36.113 "allow_accel_sequence": false, 00:20:36.113 "rdma_max_cq_size": 0, 00:20:36.113 "rdma_cm_event_timeout_ms": 0, 00:20:36.113 "dhchap_digests": [ 00:20:36.113 "sha256", 00:20:36.113 "sha384", 00:20:36.113 "sha512" 00:20:36.113 ], 00:20:36.113 "dhchap_dhgroups": [ 00:20:36.113 "null", 00:20:36.113 "ffdhe2048", 00:20:36.113 "ffdhe3072", 00:20:36.113 "ffdhe4096", 00:20:36.113 "ffdhe6144", 00:20:36.113 "ffdhe8192" 00:20:36.113 ] 00:20:36.113 } 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "method": "bdev_nvme_set_hotplug", 00:20:36.113 "params": { 00:20:36.113 "period_us": 100000, 00:20:36.113 "enable": false 00:20:36.113 } 00:20:36.113 }, 00:20:36.113 { 00:20:36.113 "method": "bdev_malloc_create", 00:20:36.113 "params": { 00:20:36.114 "name": "malloc0", 00:20:36.114 "num_blocks": 8192, 00:20:36.114 "block_size": 4096, 00:20:36.114 "physical_block_size": 4096, 00:20:36.114 "uuid": "20593a4d-45cd-4751-83ae-3f19a752057f", 00:20:36.114 "optimal_io_boundary": 0, 00:20:36.114 "md_size": 0, 00:20:36.114 "dif_type": 0, 00:20:36.114 "dif_is_head_of_md": false, 00:20:36.114 "dif_pi_format": 0 00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "bdev_wait_for_examine" 00:20:36.114 } 00:20:36.114 ] 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "subsystem": "nbd", 00:20:36.114 "config": [] 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "subsystem": "scheduler", 00:20:36.114 "config": [ 00:20:36.114 { 00:20:36.114 "method": "framework_set_scheduler", 00:20:36.114 "params": { 00:20:36.114 "name": "static" 00:20:36.114 } 00:20:36.114 } 00:20:36.114 ] 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "subsystem": "nvmf", 00:20:36.114 
"config": [ 00:20:36.114 { 00:20:36.114 "method": "nvmf_set_config", 00:20:36.114 "params": { 00:20:36.114 "discovery_filter": "match_any", 00:20:36.114 "admin_cmd_passthru": { 00:20:36.114 "identify_ctrlr": false 00:20:36.114 } 00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "nvmf_set_max_subsystems", 00:20:36.114 "params": { 00:20:36.114 "max_subsystems": 1024 00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "nvmf_set_crdt", 00:20:36.114 "params": { 00:20:36.114 "crdt1": 0, 00:20:36.114 "crdt2": 0, 00:20:36.114 "crdt3": 0 00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "nvmf_create_transport", 00:20:36.114 "params": { 00:20:36.114 "trtype": "TCP", 00:20:36.114 "max_queue_depth": 128, 00:20:36.114 "max_io_qpairs_per_ctrlr": 127, 00:20:36.114 "in_capsule_data_size": 4096, 00:20:36.114 "max_io_size": 131072, 00:20:36.114 "io_unit_size": 131072, 00:20:36.114 "max_aq_depth": 128, 00:20:36.114 "num_shared_buffers": 511, 00:20:36.114 "buf_cache_size": 4294967295, 00:20:36.114 "dif_insert_or_strip": false, 00:20:36.114 "zcopy": false, 00:20:36.114 "c2h_success": false, 00:20:36.114 "sock_priority": 0, 00:20:36.114 "abort_timeout_sec": 1, 00:20:36.114 "ack_timeout": 0, 00:20:36.114 "data_wr_pool_size": 0 00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "nvmf_create_subsystem", 00:20:36.114 "params": { 00:20:36.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.114 "allow_any_host": false, 00:20:36.114 "serial_number": "SPDK00000000000001", 00:20:36.114 "model_number": "SPDK bdev Controller", 00:20:36.114 "max_namespaces": 10, 00:20:36.114 "min_cntlid": 1, 00:20:36.114 "max_cntlid": 65519, 00:20:36.114 "ana_reporting": false 00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "nvmf_subsystem_add_host", 00:20:36.114 "params": { 00:20:36.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.114 "host": "nqn.2016-06.io.spdk:host1", 00:20:36.114 "psk": "/tmp/tmp.LBIMIM1KRA" 
00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "nvmf_subsystem_add_ns", 00:20:36.114 "params": { 00:20:36.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.114 "namespace": { 00:20:36.114 "nsid": 1, 00:20:36.114 "bdev_name": "malloc0", 00:20:36.114 "nguid": "20593A4D45CD475183AE3F19A752057F", 00:20:36.114 "uuid": "20593a4d-45cd-4751-83ae-3f19a752057f", 00:20:36.114 "no_auto_visible": false 00:20:36.114 } 00:20:36.114 } 00:20:36.114 }, 00:20:36.114 { 00:20:36.114 "method": "nvmf_subsystem_add_listener", 00:20:36.114 "params": { 00:20:36.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.114 "listen_address": { 00:20:36.114 "trtype": "TCP", 00:20:36.114 "adrfam": "IPv4", 00:20:36.114 "traddr": "10.0.0.2", 00:20:36.114 "trsvcid": "4420" 00:20:36.114 }, 00:20:36.114 "secure_channel": true 00:20:36.114 } 00:20:36.114 } 00:20:36.114 ] 00:20:36.114 } 00:20:36.114 ] 00:20:36.114 }' 00:20:36.114 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:36.376 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:36.377 "subsystems": [ 00:20:36.377 { 00:20:36.377 "subsystem": "keyring", 00:20:36.377 "config": [] 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "subsystem": "iobuf", 00:20:36.377 "config": [ 00:20:36.377 { 00:20:36.377 "method": "iobuf_set_options", 00:20:36.377 "params": { 00:20:36.377 "small_pool_count": 8192, 00:20:36.377 "large_pool_count": 1024, 00:20:36.377 "small_bufsize": 8192, 00:20:36.377 "large_bufsize": 135168 00:20:36.377 } 00:20:36.377 } 00:20:36.377 ] 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "subsystem": "sock", 00:20:36.377 "config": [ 00:20:36.377 { 00:20:36.377 "method": "sock_set_default_impl", 00:20:36.377 "params": { 00:20:36.377 "impl_name": "posix" 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "sock_impl_set_options", 00:20:36.377 
"params": { 00:20:36.377 "impl_name": "ssl", 00:20:36.377 "recv_buf_size": 4096, 00:20:36.377 "send_buf_size": 4096, 00:20:36.377 "enable_recv_pipe": true, 00:20:36.377 "enable_quickack": false, 00:20:36.377 "enable_placement_id": 0, 00:20:36.377 "enable_zerocopy_send_server": true, 00:20:36.377 "enable_zerocopy_send_client": false, 00:20:36.377 "zerocopy_threshold": 0, 00:20:36.377 "tls_version": 0, 00:20:36.377 "enable_ktls": false 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "sock_impl_set_options", 00:20:36.377 "params": { 00:20:36.377 "impl_name": "posix", 00:20:36.377 "recv_buf_size": 2097152, 00:20:36.377 "send_buf_size": 2097152, 00:20:36.377 "enable_recv_pipe": true, 00:20:36.377 "enable_quickack": false, 00:20:36.377 "enable_placement_id": 0, 00:20:36.377 "enable_zerocopy_send_server": true, 00:20:36.377 "enable_zerocopy_send_client": false, 00:20:36.377 "zerocopy_threshold": 0, 00:20:36.377 "tls_version": 0, 00:20:36.377 "enable_ktls": false 00:20:36.377 } 00:20:36.377 } 00:20:36.377 ] 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "subsystem": "vmd", 00:20:36.377 "config": [] 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "subsystem": "accel", 00:20:36.377 "config": [ 00:20:36.377 { 00:20:36.377 "method": "accel_set_options", 00:20:36.377 "params": { 00:20:36.377 "small_cache_size": 128, 00:20:36.377 "large_cache_size": 16, 00:20:36.377 "task_count": 2048, 00:20:36.377 "sequence_count": 2048, 00:20:36.377 "buf_count": 2048 00:20:36.377 } 00:20:36.377 } 00:20:36.377 ] 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "subsystem": "bdev", 00:20:36.377 "config": [ 00:20:36.377 { 00:20:36.377 "method": "bdev_set_options", 00:20:36.377 "params": { 00:20:36.377 "bdev_io_pool_size": 65535, 00:20:36.377 "bdev_io_cache_size": 256, 00:20:36.377 "bdev_auto_examine": true, 00:20:36.377 "iobuf_small_cache_size": 128, 00:20:36.377 "iobuf_large_cache_size": 16 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "bdev_raid_set_options", 
00:20:36.377 "params": { 00:20:36.377 "process_window_size_kb": 1024, 00:20:36.377 "process_max_bandwidth_mb_sec": 0 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "bdev_iscsi_set_options", 00:20:36.377 "params": { 00:20:36.377 "timeout_sec": 30 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "bdev_nvme_set_options", 00:20:36.377 "params": { 00:20:36.377 "action_on_timeout": "none", 00:20:36.377 "timeout_us": 0, 00:20:36.377 "timeout_admin_us": 0, 00:20:36.377 "keep_alive_timeout_ms": 10000, 00:20:36.377 "arbitration_burst": 0, 00:20:36.377 "low_priority_weight": 0, 00:20:36.377 "medium_priority_weight": 0, 00:20:36.377 "high_priority_weight": 0, 00:20:36.377 "nvme_adminq_poll_period_us": 10000, 00:20:36.377 "nvme_ioq_poll_period_us": 0, 00:20:36.377 "io_queue_requests": 512, 00:20:36.377 "delay_cmd_submit": true, 00:20:36.377 "transport_retry_count": 4, 00:20:36.377 "bdev_retry_count": 3, 00:20:36.377 "transport_ack_timeout": 0, 00:20:36.377 "ctrlr_loss_timeout_sec": 0, 00:20:36.377 "reconnect_delay_sec": 0, 00:20:36.377 "fast_io_fail_timeout_sec": 0, 00:20:36.377 "disable_auto_failback": false, 00:20:36.377 "generate_uuids": false, 00:20:36.377 "transport_tos": 0, 00:20:36.377 "nvme_error_stat": false, 00:20:36.377 "rdma_srq_size": 0, 00:20:36.377 "io_path_stat": false, 00:20:36.377 "allow_accel_sequence": false, 00:20:36.377 "rdma_max_cq_size": 0, 00:20:36.377 "rdma_cm_event_timeout_ms": 0, 00:20:36.377 "dhchap_digests": [ 00:20:36.377 "sha256", 00:20:36.377 "sha384", 00:20:36.377 "sha512" 00:20:36.377 ], 00:20:36.377 "dhchap_dhgroups": [ 00:20:36.377 "null", 00:20:36.377 "ffdhe2048", 00:20:36.377 "ffdhe3072", 00:20:36.377 "ffdhe4096", 00:20:36.377 "ffdhe6144", 00:20:36.377 "ffdhe8192" 00:20:36.377 ] 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "bdev_nvme_attach_controller", 00:20:36.377 "params": { 00:20:36.377 "name": "TLSTEST", 00:20:36.377 "trtype": "TCP", 00:20:36.377 "adrfam": "IPv4", 
00:20:36.377 "traddr": "10.0.0.2", 00:20:36.377 "trsvcid": "4420", 00:20:36.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.377 "prchk_reftag": false, 00:20:36.377 "prchk_guard": false, 00:20:36.377 "ctrlr_loss_timeout_sec": 0, 00:20:36.377 "reconnect_delay_sec": 0, 00:20:36.377 "fast_io_fail_timeout_sec": 0, 00:20:36.377 "psk": "/tmp/tmp.LBIMIM1KRA", 00:20:36.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.377 "hdgst": false, 00:20:36.377 "ddgst": false 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "bdev_nvme_set_hotplug", 00:20:36.377 "params": { 00:20:36.377 "period_us": 100000, 00:20:36.377 "enable": false 00:20:36.377 } 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "method": "bdev_wait_for_examine" 00:20:36.377 } 00:20:36.377 ] 00:20:36.377 }, 00:20:36.377 { 00:20:36.377 "subsystem": "nbd", 00:20:36.377 "config": [] 00:20:36.377 } 00:20:36.377 ] 00:20:36.377 }' 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3711926 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3711926 ']' 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3711926 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711926 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:36.377 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:36.378 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711926' 00:20:36.378 killing process with 
pid 3711926 00:20:36.378 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3711926 00:20:36.378 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.378 00:20:36.378 Latency(us) 00:20:36.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.378 =================================================================================================================== 00:20:36.378 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.378 [2024-07-24 20:00:24.302623] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:36.378 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3711926 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3711570 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3711570 ']' 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3711570 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711570 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711570' 00:20:36.639 killing process with pid 3711570 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 3711570 00:20:36.639 [2024-07-24 20:00:24.468261] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3711570 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.639 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:36.639 "subsystems": [ 00:20:36.639 { 00:20:36.639 "subsystem": "keyring", 00:20:36.639 "config": [] 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "subsystem": "iobuf", 00:20:36.639 "config": [ 00:20:36.639 { 00:20:36.639 "method": "iobuf_set_options", 00:20:36.639 "params": { 00:20:36.639 "small_pool_count": 8192, 00:20:36.639 "large_pool_count": 1024, 00:20:36.639 "small_bufsize": 8192, 00:20:36.639 "large_bufsize": 135168 00:20:36.639 } 00:20:36.639 } 00:20:36.639 ] 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "subsystem": "sock", 00:20:36.639 "config": [ 00:20:36.639 { 00:20:36.639 "method": "sock_set_default_impl", 00:20:36.639 "params": { 00:20:36.639 "impl_name": "posix" 00:20:36.639 } 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "method": "sock_impl_set_options", 00:20:36.639 "params": { 00:20:36.639 "impl_name": "ssl", 00:20:36.639 "recv_buf_size": 4096, 00:20:36.639 "send_buf_size": 4096, 00:20:36.639 "enable_recv_pipe": true, 00:20:36.639 "enable_quickack": false, 00:20:36.639 "enable_placement_id": 0, 00:20:36.639 "enable_zerocopy_send_server": true, 00:20:36.639 "enable_zerocopy_send_client": false, 
00:20:36.639 "zerocopy_threshold": 0, 00:20:36.639 "tls_version": 0, 00:20:36.639 "enable_ktls": false 00:20:36.639 } 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "method": "sock_impl_set_options", 00:20:36.639 "params": { 00:20:36.639 "impl_name": "posix", 00:20:36.639 "recv_buf_size": 2097152, 00:20:36.639 "send_buf_size": 2097152, 00:20:36.639 "enable_recv_pipe": true, 00:20:36.639 "enable_quickack": false, 00:20:36.639 "enable_placement_id": 0, 00:20:36.639 "enable_zerocopy_send_server": true, 00:20:36.639 "enable_zerocopy_send_client": false, 00:20:36.639 "zerocopy_threshold": 0, 00:20:36.639 "tls_version": 0, 00:20:36.639 "enable_ktls": false 00:20:36.639 } 00:20:36.639 } 00:20:36.639 ] 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "subsystem": "vmd", 00:20:36.639 "config": [] 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "subsystem": "accel", 00:20:36.639 "config": [ 00:20:36.639 { 00:20:36.639 "method": "accel_set_options", 00:20:36.639 "params": { 00:20:36.639 "small_cache_size": 128, 00:20:36.639 "large_cache_size": 16, 00:20:36.639 "task_count": 2048, 00:20:36.639 "sequence_count": 2048, 00:20:36.639 "buf_count": 2048 00:20:36.639 } 00:20:36.639 } 00:20:36.639 ] 00:20:36.639 }, 00:20:36.639 { 00:20:36.639 "subsystem": "bdev", 00:20:36.639 "config": [ 00:20:36.639 { 00:20:36.640 "method": "bdev_set_options", 00:20:36.640 "params": { 00:20:36.640 "bdev_io_pool_size": 65535, 00:20:36.640 "bdev_io_cache_size": 256, 00:20:36.640 "bdev_auto_examine": true, 00:20:36.640 "iobuf_small_cache_size": 128, 00:20:36.640 "iobuf_large_cache_size": 16 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "bdev_raid_set_options", 00:20:36.640 "params": { 00:20:36.640 "process_window_size_kb": 1024, 00:20:36.640 "process_max_bandwidth_mb_sec": 0 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "bdev_iscsi_set_options", 00:20:36.640 "params": { 00:20:36.640 "timeout_sec": 30 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": 
"bdev_nvme_set_options", 00:20:36.640 "params": { 00:20:36.640 "action_on_timeout": "none", 00:20:36.640 "timeout_us": 0, 00:20:36.640 "timeout_admin_us": 0, 00:20:36.640 "keep_alive_timeout_ms": 10000, 00:20:36.640 "arbitration_burst": 0, 00:20:36.640 "low_priority_weight": 0, 00:20:36.640 "medium_priority_weight": 0, 00:20:36.640 "high_priority_weight": 0, 00:20:36.640 "nvme_adminq_poll_period_us": 10000, 00:20:36.640 "nvme_ioq_poll_period_us": 0, 00:20:36.640 "io_queue_requests": 0, 00:20:36.640 "delay_cmd_submit": true, 00:20:36.640 "transport_retry_count": 4, 00:20:36.640 "bdev_retry_count": 3, 00:20:36.640 "transport_ack_timeout": 0, 00:20:36.640 "ctrlr_loss_timeout_sec": 0, 00:20:36.640 "reconnect_delay_sec": 0, 00:20:36.640 "fast_io_fail_timeout_sec": 0, 00:20:36.640 "disable_auto_failback": false, 00:20:36.640 "generate_uuids": false, 00:20:36.640 "transport_tos": 0, 00:20:36.640 "nvme_error_stat": false, 00:20:36.640 "rdma_srq_size": 0, 00:20:36.640 "io_path_stat": false, 00:20:36.640 "allow_accel_sequence": false, 00:20:36.640 "rdma_max_cq_size": 0, 00:20:36.640 "rdma_cm_event_timeout_ms": 0, 00:20:36.640 "dhchap_digests": [ 00:20:36.640 "sha256", 00:20:36.640 "sha384", 00:20:36.640 "sha512" 00:20:36.640 ], 00:20:36.640 "dhchap_dhgroups": [ 00:20:36.640 "null", 00:20:36.640 "ffdhe2048", 00:20:36.640 "ffdhe3072", 00:20:36.640 "ffdhe4096", 00:20:36.640 "ffdhe6144", 00:20:36.640 "ffdhe8192" 00:20:36.640 ] 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "bdev_nvme_set_hotplug", 00:20:36.640 "params": { 00:20:36.640 "period_us": 100000, 00:20:36.640 "enable": false 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "bdev_malloc_create", 00:20:36.640 "params": { 00:20:36.640 "name": "malloc0", 00:20:36.640 "num_blocks": 8192, 00:20:36.640 "block_size": 4096, 00:20:36.640 "physical_block_size": 4096, 00:20:36.640 "uuid": "20593a4d-45cd-4751-83ae-3f19a752057f", 00:20:36.640 "optimal_io_boundary": 0, 00:20:36.640 "md_size": 
0, 00:20:36.640 "dif_type": 0, 00:20:36.640 "dif_is_head_of_md": false, 00:20:36.640 "dif_pi_format": 0 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "bdev_wait_for_examine" 00:20:36.640 } 00:20:36.640 ] 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "subsystem": "nbd", 00:20:36.640 "config": [] 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "subsystem": "scheduler", 00:20:36.640 "config": [ 00:20:36.640 { 00:20:36.640 "method": "framework_set_scheduler", 00:20:36.640 "params": { 00:20:36.640 "name": "static" 00:20:36.640 } 00:20:36.640 } 00:20:36.640 ] 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "subsystem": "nvmf", 00:20:36.640 "config": [ 00:20:36.640 { 00:20:36.640 "method": "nvmf_set_config", 00:20:36.640 "params": { 00:20:36.640 "discovery_filter": "match_any", 00:20:36.640 "admin_cmd_passthru": { 00:20:36.640 "identify_ctrlr": false 00:20:36.640 } 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "nvmf_set_max_subsystems", 00:20:36.640 "params": { 00:20:36.640 "max_subsystems": 1024 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "nvmf_set_crdt", 00:20:36.640 "params": { 00:20:36.640 "crdt1": 0, 00:20:36.640 "crdt2": 0, 00:20:36.640 "crdt3": 0 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "nvmf_create_transport", 00:20:36.640 "params": { 00:20:36.640 "trtype": "TCP", 00:20:36.640 "max_queue_depth": 128, 00:20:36.640 "max_io_qpairs_per_ctrlr": 127, 00:20:36.640 "in_capsule_data_size": 4096, 00:20:36.640 "max_io_size": 131072, 00:20:36.640 "io_unit_size": 131072, 00:20:36.640 "max_aq_depth": 128, 00:20:36.640 "num_shared_buffers": 511, 00:20:36.640 "buf_cache_size": 4294967295, 00:20:36.640 "dif_insert_or_strip": false, 00:20:36.640 "zcopy": false, 00:20:36.640 "c2h_success": false, 00:20:36.640 "sock_priority": 0, 00:20:36.640 "abort_timeout_sec": 1, 00:20:36.640 "ack_timeout": 0, 00:20:36.640 "data_wr_pool_size": 0 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 
"method": "nvmf_create_subsystem", 00:20:36.640 "params": { 00:20:36.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.640 "allow_any_host": false, 00:20:36.640 "serial_number": "SPDK00000000000001", 00:20:36.640 "model_number": "SPDK bdev Controller", 00:20:36.640 "max_namespaces": 10, 00:20:36.640 "min_cntlid": 1, 00:20:36.640 "max_cntlid": 65519, 00:20:36.640 "ana_reporting": false 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "nvmf_subsystem_add_host", 00:20:36.640 "params": { 00:20:36.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.640 "host": "nqn.2016-06.io.spdk:host1", 00:20:36.640 "psk": "/tmp/tmp.LBIMIM1KRA" 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "nvmf_subsystem_add_ns", 00:20:36.640 "params": { 00:20:36.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.640 "namespace": { 00:20:36.640 "nsid": 1, 00:20:36.640 "bdev_name": "malloc0", 00:20:36.640 "nguid": "20593A4D45CD475183AE3F19A752057F", 00:20:36.640 "uuid": "20593a4d-45cd-4751-83ae-3f19a752057f", 00:20:36.640 "no_auto_visible": false 00:20:36.640 } 00:20:36.640 } 00:20:36.640 }, 00:20:36.640 { 00:20:36.640 "method": "nvmf_subsystem_add_listener", 00:20:36.640 "params": { 00:20:36.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.640 "listen_address": { 00:20:36.640 "trtype": "TCP", 00:20:36.640 "adrfam": "IPv4", 00:20:36.640 "traddr": "10.0.0.2", 00:20:36.640 "trsvcid": "4420" 00:20:36.640 }, 00:20:36.640 "secure_channel": true 00:20:36.640 } 00:20:36.640 } 00:20:36.640 ] 00:20:36.640 } 00:20:36.640 ] 00:20:36.640 }' 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3712285 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3712285 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:36.901 
20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3712285 ']' 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.901 20:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.901 [2024-07-24 20:00:24.656041] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:36.901 [2024-07-24 20:00:24.656120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.901 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.901 [2024-07-24 20:00:24.738307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.901 [2024-07-24 20:00:24.791543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.902 [2024-07-24 20:00:24.791576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.902 [2024-07-24 20:00:24.791581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.902 [2024-07-24 20:00:24.791586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:36.902 [2024-07-24 20:00:24.791589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.902 [2024-07-24 20:00:24.791630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.163 [2024-07-24 20:00:24.974613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.163 [2024-07-24 20:00:25.001470] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:37.163 [2024-07-24 20:00:25.017496] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.163 [2024-07-24 20:00:25.017667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3712553 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3712553 /var/tmp/bdevperf.sock 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3712553 ']' 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.735 20:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.735 20:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:37.735 "subsystems": [ 00:20:37.735 { 00:20:37.735 "subsystem": "keyring", 00:20:37.735 "config": [] 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "subsystem": "iobuf", 00:20:37.735 "config": [ 00:20:37.735 { 00:20:37.735 "method": "iobuf_set_options", 00:20:37.735 "params": { 00:20:37.735 "small_pool_count": 8192, 00:20:37.735 "large_pool_count": 1024, 00:20:37.735 "small_bufsize": 8192, 00:20:37.735 "large_bufsize": 135168 00:20:37.735 } 00:20:37.735 } 00:20:37.735 ] 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "subsystem": "sock", 00:20:37.735 "config": [ 00:20:37.735 { 00:20:37.735 "method": "sock_set_default_impl", 00:20:37.735 "params": { 00:20:37.735 "impl_name": "posix" 00:20:37.735 } 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "method": "sock_impl_set_options", 00:20:37.735 "params": { 00:20:37.735 "impl_name": "ssl", 00:20:37.735 "recv_buf_size": 4096, 00:20:37.735 "send_buf_size": 4096, 00:20:37.735 "enable_recv_pipe": true, 00:20:37.735 "enable_quickack": false, 00:20:37.735 "enable_placement_id": 0, 00:20:37.735 
"enable_zerocopy_send_server": true, 00:20:37.735 "enable_zerocopy_send_client": false, 00:20:37.735 "zerocopy_threshold": 0, 00:20:37.735 "tls_version": 0, 00:20:37.735 "enable_ktls": false 00:20:37.735 } 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "method": "sock_impl_set_options", 00:20:37.735 "params": { 00:20:37.735 "impl_name": "posix", 00:20:37.735 "recv_buf_size": 2097152, 00:20:37.735 "send_buf_size": 2097152, 00:20:37.735 "enable_recv_pipe": true, 00:20:37.735 "enable_quickack": false, 00:20:37.735 "enable_placement_id": 0, 00:20:37.735 "enable_zerocopy_send_server": true, 00:20:37.735 "enable_zerocopy_send_client": false, 00:20:37.735 "zerocopy_threshold": 0, 00:20:37.735 "tls_version": 0, 00:20:37.735 "enable_ktls": false 00:20:37.735 } 00:20:37.735 } 00:20:37.735 ] 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "subsystem": "vmd", 00:20:37.735 "config": [] 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "subsystem": "accel", 00:20:37.735 "config": [ 00:20:37.735 { 00:20:37.735 "method": "accel_set_options", 00:20:37.735 "params": { 00:20:37.735 "small_cache_size": 128, 00:20:37.735 "large_cache_size": 16, 00:20:37.735 "task_count": 2048, 00:20:37.735 "sequence_count": 2048, 00:20:37.735 "buf_count": 2048 00:20:37.735 } 00:20:37.735 } 00:20:37.735 ] 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "subsystem": "bdev", 00:20:37.735 "config": [ 00:20:37.735 { 00:20:37.735 "method": "bdev_set_options", 00:20:37.735 "params": { 00:20:37.735 "bdev_io_pool_size": 65535, 00:20:37.735 "bdev_io_cache_size": 256, 00:20:37.735 "bdev_auto_examine": true, 00:20:37.735 "iobuf_small_cache_size": 128, 00:20:37.735 "iobuf_large_cache_size": 16 00:20:37.735 } 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "method": "bdev_raid_set_options", 00:20:37.735 "params": { 00:20:37.735 "process_window_size_kb": 1024, 00:20:37.735 "process_max_bandwidth_mb_sec": 0 00:20:37.735 } 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "method": "bdev_iscsi_set_options", 00:20:37.735 "params": { 00:20:37.735 
"timeout_sec": 30 00:20:37.735 } 00:20:37.735 }, 00:20:37.735 { 00:20:37.735 "method": "bdev_nvme_set_options", 00:20:37.735 "params": { 00:20:37.735 "action_on_timeout": "none", 00:20:37.735 "timeout_us": 0, 00:20:37.735 "timeout_admin_us": 0, 00:20:37.735 "keep_alive_timeout_ms": 10000, 00:20:37.735 "arbitration_burst": 0, 00:20:37.735 "low_priority_weight": 0, 00:20:37.735 "medium_priority_weight": 0, 00:20:37.735 "high_priority_weight": 0, 00:20:37.735 "nvme_adminq_poll_period_us": 10000, 00:20:37.735 "nvme_ioq_poll_period_us": 0, 00:20:37.735 "io_queue_requests": 512, 00:20:37.735 "delay_cmd_submit": true, 00:20:37.735 "transport_retry_count": 4, 00:20:37.735 "bdev_retry_count": 3, 00:20:37.735 "transport_ack_timeout": 0, 00:20:37.736 "ctrlr_loss_timeout_sec": 0, 00:20:37.736 "reconnect_delay_sec": 0, 00:20:37.736 "fast_io_fail_timeout_sec": 0, 00:20:37.736 "disable_auto_failback": false, 00:20:37.736 "generate_uuids": false, 00:20:37.736 "transport_tos": 0, 00:20:37.736 "nvme_error_stat": false, 00:20:37.736 "rdma_srq_size": 0, 00:20:37.736 "io_path_stat": false, 00:20:37.736 "allow_accel_sequence": false, 00:20:37.736 "rdma_max_cq_size": 0, 00:20:37.736 "rdma_cm_event_timeout_ms": 0, 00:20:37.736 "dhchap_digests": [ 00:20:37.736 "sha256", 00:20:37.736 "sha384", 00:20:37.736 "sha512" 00:20:37.736 ], 00:20:37.736 "dhchap_dhgroups": [ 00:20:37.736 "null", 00:20:37.736 "ffdhe2048", 00:20:37.736 "ffdhe3072", 00:20:37.736 "ffdhe4096", 00:20:37.736 "ffdhe6144", 00:20:37.736 "ffdhe8192" 00:20:37.736 ] 00:20:37.736 } 00:20:37.736 }, 00:20:37.736 { 00:20:37.736 "method": "bdev_nvme_attach_controller", 00:20:37.736 "params": { 00:20:37.736 "name": "TLSTEST", 00:20:37.736 "trtype": "TCP", 00:20:37.736 "adrfam": "IPv4", 00:20:37.736 "traddr": "10.0.0.2", 00:20:37.736 "trsvcid": "4420", 00:20:37.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.736 "prchk_reftag": false, 00:20:37.736 "prchk_guard": false, 00:20:37.736 "ctrlr_loss_timeout_sec": 0, 00:20:37.736 
"reconnect_delay_sec": 0, 00:20:37.736 "fast_io_fail_timeout_sec": 0, 00:20:37.736 "psk": "/tmp/tmp.LBIMIM1KRA", 00:20:37.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.736 "hdgst": false, 00:20:37.736 "ddgst": false 00:20:37.736 } 00:20:37.736 }, 00:20:37.736 { 00:20:37.736 "method": "bdev_nvme_set_hotplug", 00:20:37.736 "params": { 00:20:37.736 "period_us": 100000, 00:20:37.736 "enable": false 00:20:37.736 } 00:20:37.736 }, 00:20:37.736 { 00:20:37.736 "method": "bdev_wait_for_examine" 00:20:37.736 } 00:20:37.736 ] 00:20:37.736 }, 00:20:37.736 { 00:20:37.736 "subsystem": "nbd", 00:20:37.736 "config": [] 00:20:37.736 } 00:20:37.736 ] 00:20:37.736 }' 00:20:37.736 [2024-07-24 20:00:25.498070] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:37.736 [2024-07-24 20:00:25.498122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712553 ] 00:20:37.736 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.736 [2024-07-24 20:00:25.547155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.736 [2024-07-24 20:00:25.599892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.996 [2024-07-24 20:00:25.724590] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.996 [2024-07-24 20:00:25.724660] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:38.567 20:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.567 20:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:38.567 20:00:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:38.567 Running I/O for 10 seconds... 00:20:48.571 00:20:48.571 Latency(us) 00:20:48.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.571 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:48.571 Verification LBA range: start 0x0 length 0x2000 00:20:48.571 TLSTESTn1 : 10.07 2238.68 8.74 0.00 0.00 56986.54 5789.01 148548.27 00:20:48.571 =================================================================================================================== 00:20:48.571 Total : 2238.68 8.74 0.00 0.00 56986.54 5789.01 148548.27 00:20:48.571 0 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3712553 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3712553 ']' 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3712553 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3712553 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3712553' 00:20:48.571 killing process with pid 3712553 00:20:48.571 20:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3712553 00:20:48.571 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.571 00:20:48.571 Latency(us) 00:20:48.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.571 =================================================================================================================== 00:20:48.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.571 [2024-07-24 20:00:36.490293] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:48.571 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3712553 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3712285 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3712285 ']' 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3712285 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3712285 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3712285' 00:20:48.832 killing process with pid 3712285 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3712285 00:20:48.832 
[2024-07-24 20:00:36.654685] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3712285 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3714653 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3714653 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3714653 ']' 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.832 20:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 [2024-07-24 20:00:36.830561] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:49.092 [2024-07-24 20:00:36.830611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.092 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.092 [2024-07-24 20:00:36.898397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.092 [2024-07-24 20:00:36.961229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.092 [2024-07-24 20:00:36.961269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.092 [2024-07-24 20:00:36.961276] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.092 [2024-07-24 20:00:36.961283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.092 [2024-07-24 20:00:36.961288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.092 [2024-07-24 20:00:36.961316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.662 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.662 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.LBIMIM1KRA 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LBIMIM1KRA 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.923 [2024-07-24 20:00:37.800234] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.923 20:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.219 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.219 [2024-07-24 20:00:38.133055] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.219 [2024-07-24 20:00:38.133263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:50.219 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.481 malloc0 00:20:50.481 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LBIMIM1KRA 00:20:50.742 [2024-07-24 20:00:38.620916] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3715018 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3715018 /var/tmp/bdevperf.sock 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3715018 ']' 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:50.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:50.742 20:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.742 [2024-07-24 20:00:38.687856] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:50.742 [2024-07-24 20:00:38.687921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715018 ] 00:20:51.003 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.003 [2024-07-24 20:00:38.768886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.003 [2024-07-24 20:00:38.822545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.576 20:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.576 20:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:51.576 20:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LBIMIM1KRA 00:20:51.836 20:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:51.836 [2024-07-24 20:00:39.744798] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.097 nvme0n1 00:20:52.097 20:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:52.097 Running I/O for 1 seconds... 00:20:53.047 00:20:53.047 Latency(us) 00:20:53.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.047 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:53.047 Verification LBA range: start 0x0 length 0x2000 00:20:53.047 nvme0n1 : 1.07 1794.34 7.01 0.00 0.00 69398.69 5816.32 138936.32 00:20:53.047 =================================================================================================================== 00:20:53.047 Total : 1794.34 7.01 0.00 0.00 69398.69 5816.32 138936.32 00:20:53.047 0 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3715018 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3715018 ']' 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3715018 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3715018 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3715018' 00:20:53.311 killing process with pid 3715018 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
3715018 00:20:53.311 Received shutdown signal, test time was about 1.000000 seconds 00:20:53.311 00:20:53.311 Latency(us) 00:20:53.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.311 =================================================================================================================== 00:20:53.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.311 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3715018 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3714653 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3714653 ']' 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3714653 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3714653 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3714653' 00:20:53.573 killing process with pid 3714653 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3714653 00:20:53.573 [2024-07-24 20:00:41.321483] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3714653 
00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3715696 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3715696 00:20:53.573 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:53.574 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3715696 ']' 00:20:53.574 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.574 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.574 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.574 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.574 20:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.574 [2024-07-24 20:00:41.520260] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:20:53.574 [2024-07-24 20:00:41.520310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.837 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.837 [2024-07-24 20:00:41.587429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.837 [2024-07-24 20:00:41.650438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.837 [2024-07-24 20:00:41.650479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.837 [2024-07-24 20:00:41.650487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.837 [2024-07-24 20:00:41.650494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.837 [2024-07-24 20:00:41.650500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:53.837 [2024-07-24 20:00:41.650527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.409 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.409 [2024-07-24 20:00:42.349033] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.671 malloc0 00:20:54.671 [2024-07-24 20:00:42.375890] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.671 [2024-07-24 20:00:42.391504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3715862 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3715862 /var/tmp/bdevperf.sock 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3715862 ']' 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:54.671 20:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.671 [2024-07-24 20:00:42.463254] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:20:54.671 [2024-07-24 20:00:42.463300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715862 ] 00:20:54.671 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.671 [2024-07-24 20:00:42.535969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.671 [2024-07-24 20:00:42.589409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.613 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.613 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:55.613 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LBIMIM1KRA 00:20:55.613 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:55.613 [2024-07-24 20:00:43.555682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.874 nvme0n1 00:20:55.874 20:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:55.874 Running I/O for 1 seconds... 
00:20:57.258 00:20:57.258 Latency(us) 00:20:57.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.258 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:57.258 Verification LBA range: start 0x0 length 0x2000 00:20:57.258 nvme0n1 : 1.05 1956.77 7.64 0.00 0.00 63881.44 4833.28 131072.00 00:20:57.258 =================================================================================================================== 00:20:57.258 Total : 1956.77 7.64 0.00 0.00 63881.44 4833.28 131072.00 00:20:57.258 0 00:20:57.258 20:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:57.258 20:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.258 20:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.258 20:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.258 20:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:57.258 "subsystems": [ 00:20:57.258 { 00:20:57.258 "subsystem": "keyring", 00:20:57.258 "config": [ 00:20:57.258 { 00:20:57.258 "method": "keyring_file_add_key", 00:20:57.258 "params": { 00:20:57.258 "name": "key0", 00:20:57.258 "path": "/tmp/tmp.LBIMIM1KRA" 00:20:57.258 } 00:20:57.258 } 00:20:57.258 ] 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "subsystem": "iobuf", 00:20:57.258 "config": [ 00:20:57.258 { 00:20:57.258 "method": "iobuf_set_options", 00:20:57.258 "params": { 00:20:57.258 "small_pool_count": 8192, 00:20:57.258 "large_pool_count": 1024, 00:20:57.258 "small_bufsize": 8192, 00:20:57.258 "large_bufsize": 135168 00:20:57.258 } 00:20:57.258 } 00:20:57.258 ] 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "subsystem": "sock", 00:20:57.258 "config": [ 00:20:57.258 { 00:20:57.258 "method": "sock_set_default_impl", 00:20:57.258 "params": { 00:20:57.258 "impl_name": "posix" 00:20:57.258 } 
00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "method": "sock_impl_set_options", 00:20:57.258 "params": { 00:20:57.258 "impl_name": "ssl", 00:20:57.258 "recv_buf_size": 4096, 00:20:57.258 "send_buf_size": 4096, 00:20:57.258 "enable_recv_pipe": true, 00:20:57.258 "enable_quickack": false, 00:20:57.258 "enable_placement_id": 0, 00:20:57.258 "enable_zerocopy_send_server": true, 00:20:57.258 "enable_zerocopy_send_client": false, 00:20:57.258 "zerocopy_threshold": 0, 00:20:57.258 "tls_version": 0, 00:20:57.258 "enable_ktls": false 00:20:57.258 } 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "method": "sock_impl_set_options", 00:20:57.258 "params": { 00:20:57.258 "impl_name": "posix", 00:20:57.258 "recv_buf_size": 2097152, 00:20:57.258 "send_buf_size": 2097152, 00:20:57.258 "enable_recv_pipe": true, 00:20:57.258 "enable_quickack": false, 00:20:57.258 "enable_placement_id": 0, 00:20:57.258 "enable_zerocopy_send_server": true, 00:20:57.258 "enable_zerocopy_send_client": false, 00:20:57.258 "zerocopy_threshold": 0, 00:20:57.258 "tls_version": 0, 00:20:57.258 "enable_ktls": false 00:20:57.258 } 00:20:57.258 } 00:20:57.258 ] 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "subsystem": "vmd", 00:20:57.258 "config": [] 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "subsystem": "accel", 00:20:57.258 "config": [ 00:20:57.258 { 00:20:57.258 "method": "accel_set_options", 00:20:57.258 "params": { 00:20:57.258 "small_cache_size": 128, 00:20:57.258 "large_cache_size": 16, 00:20:57.258 "task_count": 2048, 00:20:57.258 "sequence_count": 2048, 00:20:57.258 "buf_count": 2048 00:20:57.258 } 00:20:57.258 } 00:20:57.258 ] 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "subsystem": "bdev", 00:20:57.258 "config": [ 00:20:57.258 { 00:20:57.258 "method": "bdev_set_options", 00:20:57.258 "params": { 00:20:57.258 "bdev_io_pool_size": 65535, 00:20:57.258 "bdev_io_cache_size": 256, 00:20:57.258 "bdev_auto_examine": true, 00:20:57.258 "iobuf_small_cache_size": 128, 00:20:57.258 "iobuf_large_cache_size": 16 
00:20:57.258 } 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "method": "bdev_raid_set_options", 00:20:57.258 "params": { 00:20:57.258 "process_window_size_kb": 1024, 00:20:57.258 "process_max_bandwidth_mb_sec": 0 00:20:57.258 } 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "method": "bdev_iscsi_set_options", 00:20:57.258 "params": { 00:20:57.258 "timeout_sec": 30 00:20:57.258 } 00:20:57.258 }, 00:20:57.258 { 00:20:57.258 "method": "bdev_nvme_set_options", 00:20:57.258 "params": { 00:20:57.258 "action_on_timeout": "none", 00:20:57.258 "timeout_us": 0, 00:20:57.258 "timeout_admin_us": 0, 00:20:57.258 "keep_alive_timeout_ms": 10000, 00:20:57.258 "arbitration_burst": 0, 00:20:57.258 "low_priority_weight": 0, 00:20:57.258 "medium_priority_weight": 0, 00:20:57.258 "high_priority_weight": 0, 00:20:57.258 "nvme_adminq_poll_period_us": 10000, 00:20:57.258 "nvme_ioq_poll_period_us": 0, 00:20:57.258 "io_queue_requests": 0, 00:20:57.258 "delay_cmd_submit": true, 00:20:57.258 "transport_retry_count": 4, 00:20:57.258 "bdev_retry_count": 3, 00:20:57.258 "transport_ack_timeout": 0, 00:20:57.258 "ctrlr_loss_timeout_sec": 0, 00:20:57.258 "reconnect_delay_sec": 0, 00:20:57.258 "fast_io_fail_timeout_sec": 0, 00:20:57.258 "disable_auto_failback": false, 00:20:57.258 "generate_uuids": false, 00:20:57.258 "transport_tos": 0, 00:20:57.258 "nvme_error_stat": false, 00:20:57.258 "rdma_srq_size": 0, 00:20:57.258 "io_path_stat": false, 00:20:57.258 "allow_accel_sequence": false, 00:20:57.258 "rdma_max_cq_size": 0, 00:20:57.258 "rdma_cm_event_timeout_ms": 0, 00:20:57.258 "dhchap_digests": [ 00:20:57.258 "sha256", 00:20:57.258 "sha384", 00:20:57.258 "sha512" 00:20:57.258 ], 00:20:57.258 "dhchap_dhgroups": [ 00:20:57.258 "null", 00:20:57.259 "ffdhe2048", 00:20:57.259 "ffdhe3072", 00:20:57.259 "ffdhe4096", 00:20:57.259 "ffdhe6144", 00:20:57.259 "ffdhe8192" 00:20:57.259 ] 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_nvme_set_hotplug", 00:20:57.259 "params": { 00:20:57.259 
"period_us": 100000, 00:20:57.259 "enable": false 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_malloc_create", 00:20:57.259 "params": { 00:20:57.259 "name": "malloc0", 00:20:57.259 "num_blocks": 8192, 00:20:57.259 "block_size": 4096, 00:20:57.259 "physical_block_size": 4096, 00:20:57.259 "uuid": "b562b951-7578-4471-8d5c-a04ccfefa0ba", 00:20:57.259 "optimal_io_boundary": 0, 00:20:57.259 "md_size": 0, 00:20:57.259 "dif_type": 0, 00:20:57.259 "dif_is_head_of_md": false, 00:20:57.259 "dif_pi_format": 0 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_wait_for_examine" 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "nbd", 00:20:57.259 "config": [] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "scheduler", 00:20:57.259 "config": [ 00:20:57.259 { 00:20:57.259 "method": "framework_set_scheduler", 00:20:57.259 "params": { 00:20:57.259 "name": "static" 00:20:57.259 } 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "nvmf", 00:20:57.259 "config": [ 00:20:57.259 { 00:20:57.259 "method": "nvmf_set_config", 00:20:57.259 "params": { 00:20:57.259 "discovery_filter": "match_any", 00:20:57.259 "admin_cmd_passthru": { 00:20:57.259 "identify_ctrlr": false 00:20:57.259 } 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "nvmf_set_max_subsystems", 00:20:57.259 "params": { 00:20:57.259 "max_subsystems": 1024 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "nvmf_set_crdt", 00:20:57.259 "params": { 00:20:57.259 "crdt1": 0, 00:20:57.259 "crdt2": 0, 00:20:57.259 "crdt3": 0 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "nvmf_create_transport", 00:20:57.259 "params": { 00:20:57.259 "trtype": "TCP", 00:20:57.259 "max_queue_depth": 128, 00:20:57.259 "max_io_qpairs_per_ctrlr": 127, 00:20:57.259 "in_capsule_data_size": 4096, 00:20:57.259 "max_io_size": 131072, 00:20:57.259 "io_unit_size": 
131072, 00:20:57.259 "max_aq_depth": 128, 00:20:57.259 "num_shared_buffers": 511, 00:20:57.259 "buf_cache_size": 4294967295, 00:20:57.259 "dif_insert_or_strip": false, 00:20:57.259 "zcopy": false, 00:20:57.259 "c2h_success": false, 00:20:57.259 "sock_priority": 0, 00:20:57.259 "abort_timeout_sec": 1, 00:20:57.259 "ack_timeout": 0, 00:20:57.259 "data_wr_pool_size": 0 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "nvmf_create_subsystem", 00:20:57.259 "params": { 00:20:57.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.259 "allow_any_host": false, 00:20:57.259 "serial_number": "00000000000000000000", 00:20:57.259 "model_number": "SPDK bdev Controller", 00:20:57.259 "max_namespaces": 32, 00:20:57.259 "min_cntlid": 1, 00:20:57.259 "max_cntlid": 65519, 00:20:57.259 "ana_reporting": false 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "nvmf_subsystem_add_host", 00:20:57.259 "params": { 00:20:57.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.259 "host": "nqn.2016-06.io.spdk:host1", 00:20:57.259 "psk": "key0" 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "nvmf_subsystem_add_ns", 00:20:57.259 "params": { 00:20:57.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.259 "namespace": { 00:20:57.259 "nsid": 1, 00:20:57.259 "bdev_name": "malloc0", 00:20:57.259 "nguid": "B562B951757844718D5CA04CCFEFA0BA", 00:20:57.259 "uuid": "b562b951-7578-4471-8d5c-a04ccfefa0ba", 00:20:57.259 "no_auto_visible": false 00:20:57.259 } 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "nvmf_subsystem_add_listener", 00:20:57.259 "params": { 00:20:57.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.259 "listen_address": { 00:20:57.259 "trtype": "TCP", 00:20:57.259 "adrfam": "IPv4", 00:20:57.259 "traddr": "10.0.0.2", 00:20:57.259 "trsvcid": "4420" 00:20:57.259 }, 00:20:57.259 "secure_channel": false, 00:20:57.259 "sock_impl": "ssl" 00:20:57.259 } 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 } 00:20:57.259 ] 
00:20:57.259 }' 00:20:57.259 20:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:57.259 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:57.259 "subsystems": [ 00:20:57.259 { 00:20:57.259 "subsystem": "keyring", 00:20:57.259 "config": [ 00:20:57.259 { 00:20:57.259 "method": "keyring_file_add_key", 00:20:57.259 "params": { 00:20:57.259 "name": "key0", 00:20:57.259 "path": "/tmp/tmp.LBIMIM1KRA" 00:20:57.259 } 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "iobuf", 00:20:57.259 "config": [ 00:20:57.259 { 00:20:57.259 "method": "iobuf_set_options", 00:20:57.259 "params": { 00:20:57.259 "small_pool_count": 8192, 00:20:57.259 "large_pool_count": 1024, 00:20:57.259 "small_bufsize": 8192, 00:20:57.259 "large_bufsize": 135168 00:20:57.259 } 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "sock", 00:20:57.259 "config": [ 00:20:57.259 { 00:20:57.259 "method": "sock_set_default_impl", 00:20:57.259 "params": { 00:20:57.259 "impl_name": "posix" 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "sock_impl_set_options", 00:20:57.259 "params": { 00:20:57.259 "impl_name": "ssl", 00:20:57.259 "recv_buf_size": 4096, 00:20:57.259 "send_buf_size": 4096, 00:20:57.259 "enable_recv_pipe": true, 00:20:57.259 "enable_quickack": false, 00:20:57.259 "enable_placement_id": 0, 00:20:57.259 "enable_zerocopy_send_server": true, 00:20:57.259 "enable_zerocopy_send_client": false, 00:20:57.259 "zerocopy_threshold": 0, 00:20:57.259 "tls_version": 0, 00:20:57.259 "enable_ktls": false 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "sock_impl_set_options", 00:20:57.259 "params": { 00:20:57.259 "impl_name": "posix", 00:20:57.259 "recv_buf_size": 2097152, 00:20:57.259 "send_buf_size": 2097152, 00:20:57.259 
"enable_recv_pipe": true, 00:20:57.259 "enable_quickack": false, 00:20:57.259 "enable_placement_id": 0, 00:20:57.259 "enable_zerocopy_send_server": true, 00:20:57.259 "enable_zerocopy_send_client": false, 00:20:57.259 "zerocopy_threshold": 0, 00:20:57.259 "tls_version": 0, 00:20:57.259 "enable_ktls": false 00:20:57.259 } 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "vmd", 00:20:57.259 "config": [] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "accel", 00:20:57.259 "config": [ 00:20:57.259 { 00:20:57.259 "method": "accel_set_options", 00:20:57.259 "params": { 00:20:57.259 "small_cache_size": 128, 00:20:57.259 "large_cache_size": 16, 00:20:57.259 "task_count": 2048, 00:20:57.259 "sequence_count": 2048, 00:20:57.259 "buf_count": 2048 00:20:57.259 } 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "bdev", 00:20:57.259 "config": [ 00:20:57.259 { 00:20:57.259 "method": "bdev_set_options", 00:20:57.259 "params": { 00:20:57.259 "bdev_io_pool_size": 65535, 00:20:57.259 "bdev_io_cache_size": 256, 00:20:57.259 "bdev_auto_examine": true, 00:20:57.259 "iobuf_small_cache_size": 128, 00:20:57.259 "iobuf_large_cache_size": 16 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_raid_set_options", 00:20:57.259 "params": { 00:20:57.259 "process_window_size_kb": 1024, 00:20:57.259 "process_max_bandwidth_mb_sec": 0 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_iscsi_set_options", 00:20:57.259 "params": { 00:20:57.259 "timeout_sec": 30 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_nvme_set_options", 00:20:57.259 "params": { 00:20:57.259 "action_on_timeout": "none", 00:20:57.259 "timeout_us": 0, 00:20:57.259 "timeout_admin_us": 0, 00:20:57.259 "keep_alive_timeout_ms": 10000, 00:20:57.259 "arbitration_burst": 0, 00:20:57.259 "low_priority_weight": 0, 00:20:57.259 "medium_priority_weight": 0, 00:20:57.259 
"high_priority_weight": 0, 00:20:57.259 "nvme_adminq_poll_period_us": 10000, 00:20:57.259 "nvme_ioq_poll_period_us": 0, 00:20:57.259 "io_queue_requests": 512, 00:20:57.259 "delay_cmd_submit": true, 00:20:57.259 "transport_retry_count": 4, 00:20:57.259 "bdev_retry_count": 3, 00:20:57.259 "transport_ack_timeout": 0, 00:20:57.259 "ctrlr_loss_timeout_sec": 0, 00:20:57.259 "reconnect_delay_sec": 0, 00:20:57.259 "fast_io_fail_timeout_sec": 0, 00:20:57.259 "disable_auto_failback": false, 00:20:57.259 "generate_uuids": false, 00:20:57.259 "transport_tos": 0, 00:20:57.259 "nvme_error_stat": false, 00:20:57.259 "rdma_srq_size": 0, 00:20:57.259 "io_path_stat": false, 00:20:57.259 "allow_accel_sequence": false, 00:20:57.259 "rdma_max_cq_size": 0, 00:20:57.259 "rdma_cm_event_timeout_ms": 0, 00:20:57.259 "dhchap_digests": [ 00:20:57.259 "sha256", 00:20:57.259 "sha384", 00:20:57.259 "sha512" 00:20:57.259 ], 00:20:57.259 "dhchap_dhgroups": [ 00:20:57.259 "null", 00:20:57.259 "ffdhe2048", 00:20:57.259 "ffdhe3072", 00:20:57.259 "ffdhe4096", 00:20:57.259 "ffdhe6144", 00:20:57.259 "ffdhe8192" 00:20:57.259 ] 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_nvme_attach_controller", 00:20:57.259 "params": { 00:20:57.259 "name": "nvme0", 00:20:57.259 "trtype": "TCP", 00:20:57.259 "adrfam": "IPv4", 00:20:57.259 "traddr": "10.0.0.2", 00:20:57.259 "trsvcid": "4420", 00:20:57.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.259 "prchk_reftag": false, 00:20:57.259 "prchk_guard": false, 00:20:57.259 "ctrlr_loss_timeout_sec": 0, 00:20:57.259 "reconnect_delay_sec": 0, 00:20:57.259 "fast_io_fail_timeout_sec": 0, 00:20:57.259 "psk": "key0", 00:20:57.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.259 "hdgst": false, 00:20:57.259 "ddgst": false 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_nvme_set_hotplug", 00:20:57.259 "params": { 00:20:57.259 "period_us": 100000, 00:20:57.259 "enable": false 00:20:57.259 } 00:20:57.259 }, 
00:20:57.259 { 00:20:57.259 "method": "bdev_enable_histogram", 00:20:57.259 "params": { 00:20:57.259 "name": "nvme0n1", 00:20:57.259 "enable": true 00:20:57.259 } 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "method": "bdev_wait_for_examine" 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }, 00:20:57.259 { 00:20:57.259 "subsystem": "nbd", 00:20:57.259 "config": [] 00:20:57.259 } 00:20:57.259 ] 00:20:57.259 }' 00:20:57.259 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3715862 00:20:57.259 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3715862 ']' 00:20:57.259 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3715862 00:20:57.259 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:57.259 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.259 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3715862 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3715862' 00:20:57.520 killing process with pid 3715862 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3715862 00:20:57.520 Received shutdown signal, test time was about 1.000000 seconds 00:20:57.520 00:20:57.520 Latency(us) 00:20:57.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.520 =================================================================================================================== 00:20:57.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3715862 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3715696 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3715696 ']' 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3715696 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3715696 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3715696' 00:20:57.520 killing process with pid 3715696 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3715696 00:20:57.520 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3715696 00:20:57.782 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:57.782 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.782 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:57.782 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:57.782 "subsystems": [ 00:20:57.782 { 00:20:57.782 "subsystem": "keyring", 00:20:57.782 "config": [ 00:20:57.782 { 00:20:57.782 
"method": "keyring_file_add_key", 00:20:57.782 "params": { 00:20:57.782 "name": "key0", 00:20:57.782 "path": "/tmp/tmp.LBIMIM1KRA" 00:20:57.782 } 00:20:57.782 } 00:20:57.782 ] 00:20:57.782 }, 00:20:57.782 { 00:20:57.782 "subsystem": "iobuf", 00:20:57.782 "config": [ 00:20:57.782 { 00:20:57.782 "method": "iobuf_set_options", 00:20:57.782 "params": { 00:20:57.782 "small_pool_count": 8192, 00:20:57.782 "large_pool_count": 1024, 00:20:57.782 "small_bufsize": 8192, 00:20:57.782 "large_bufsize": 135168 00:20:57.782 } 00:20:57.782 } 00:20:57.782 ] 00:20:57.782 }, 00:20:57.782 { 00:20:57.782 "subsystem": "sock", 00:20:57.782 "config": [ 00:20:57.782 { 00:20:57.782 "method": "sock_set_default_impl", 00:20:57.782 "params": { 00:20:57.782 "impl_name": "posix" 00:20:57.782 } 00:20:57.782 }, 00:20:57.782 { 00:20:57.782 "method": "sock_impl_set_options", 00:20:57.782 "params": { 00:20:57.782 "impl_name": "ssl", 00:20:57.782 "recv_buf_size": 4096, 00:20:57.782 "send_buf_size": 4096, 00:20:57.782 "enable_recv_pipe": true, 00:20:57.782 "enable_quickack": false, 00:20:57.782 "enable_placement_id": 0, 00:20:57.782 "enable_zerocopy_send_server": true, 00:20:57.782 "enable_zerocopy_send_client": false, 00:20:57.782 "zerocopy_threshold": 0, 00:20:57.782 "tls_version": 0, 00:20:57.782 "enable_ktls": false 00:20:57.782 } 00:20:57.782 }, 00:20:57.782 { 00:20:57.782 "method": "sock_impl_set_options", 00:20:57.782 "params": { 00:20:57.782 "impl_name": "posix", 00:20:57.782 "recv_buf_size": 2097152, 00:20:57.782 "send_buf_size": 2097152, 00:20:57.782 "enable_recv_pipe": true, 00:20:57.782 "enable_quickack": false, 00:20:57.782 "enable_placement_id": 0, 00:20:57.782 "enable_zerocopy_send_server": true, 00:20:57.782 "enable_zerocopy_send_client": false, 00:20:57.782 "zerocopy_threshold": 0, 00:20:57.782 "tls_version": 0, 00:20:57.782 "enable_ktls": false 00:20:57.782 } 00:20:57.782 } 00:20:57.782 ] 00:20:57.782 }, 00:20:57.782 { 00:20:57.782 "subsystem": "vmd", 00:20:57.782 "config": [] 
00:20:57.782 }, 00:20:57.782 { 00:20:57.782 "subsystem": "accel", 00:20:57.782 "config": [ 00:20:57.782 { 00:20:57.782 "method": "accel_set_options", 00:20:57.782 "params": { 00:20:57.782 "small_cache_size": 128, 00:20:57.782 "large_cache_size": 16, 00:20:57.782 "task_count": 2048, 00:20:57.782 "sequence_count": 2048, 00:20:57.782 "buf_count": 2048 00:20:57.782 } 00:20:57.782 } 00:20:57.782 ] 00:20:57.782 }, 00:20:57.783 { 00:20:57.783 "subsystem": "bdev", 00:20:57.783 "config": [ 00:20:57.783 { 00:20:57.783 "method": "bdev_set_options", 00:20:57.783 "params": { 00:20:57.783 "bdev_io_pool_size": 65535, 00:20:57.783 "bdev_io_cache_size": 256, 00:20:57.783 "bdev_auto_examine": true, 00:20:57.783 "iobuf_small_cache_size": 128, 00:20:57.783 "iobuf_large_cache_size": 16 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "bdev_raid_set_options", 00:20:57.783 "params": { 00:20:57.783 "process_window_size_kb": 1024, 00:20:57.783 "process_max_bandwidth_mb_sec": 0 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "bdev_iscsi_set_options", 00:20:57.783 "params": { 00:20:57.783 "timeout_sec": 30 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "bdev_nvme_set_options", 00:20:57.783 "params": { 00:20:57.783 "action_on_timeout": "none", 00:20:57.783 "timeout_us": 0, 00:20:57.783 "timeout_admin_us": 0, 00:20:57.783 "keep_alive_timeout_ms": 10000, 00:20:57.783 "arbitration_burst": 0, 00:20:57.783 "low_priority_weight": 0, 00:20:57.783 "medium_priority_weight": 0, 00:20:57.783 "high_priority_weight": 0, 00:20:57.783 "nvme_adminq_poll_period_us": 10000, 00:20:57.783 "nvme_ioq_poll_period_us": 0, 00:20:57.783 "io_queue_requests": 0, 00:20:57.783 "delay_cmd_submit": true, 00:20:57.783 "transport_retry_count": 4, 00:20:57.783 "bdev_retry_count": 3, 00:20:57.783 "transport_ack_timeout": 0, 00:20:57.783 "ctrlr_loss_timeout_sec": 0, 00:20:57.783 "reconnect_delay_sec": 0, 00:20:57.783 "fast_io_fail_timeout_sec": 0, 00:20:57.783 
"disable_auto_failback": false, 00:20:57.783 "generate_uuids": false, 00:20:57.783 "transport_tos": 0, 00:20:57.783 "nvme_error_stat": false, 00:20:57.783 "rdma_srq_size": 0, 00:20:57.783 "io_path_stat": false, 00:20:57.783 "allow_accel_sequence": false, 00:20:57.783 "rdma_max_cq_size": 0, 00:20:57.783 "rdma_cm_event_timeout_ms": 0, 00:20:57.783 "dhchap_digests": [ 00:20:57.783 "sha256", 00:20:57.783 "sha384", 00:20:57.783 "sha512" 00:20:57.783 ], 00:20:57.783 "dhchap_dhgroups": [ 00:20:57.783 "null", 00:20:57.783 "ffdhe2048", 00:20:57.783 "ffdhe3072", 00:20:57.783 "ffdhe4096", 00:20:57.783 "ffdhe6144", 00:20:57.783 "ffdhe8192" 00:20:57.783 ] 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "bdev_nvme_set_hotplug", 00:20:57.783 "params": { 00:20:57.783 "period_us": 100000, 00:20:57.783 "enable": false 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "bdev_malloc_create", 00:20:57.783 "params": { 00:20:57.783 "name": "malloc0", 00:20:57.783 "num_blocks": 8192, 00:20:57.783 "block_size": 4096, 00:20:57.783 "physical_block_size": 4096, 00:20:57.783 "uuid": "b562b951-7578-4471-8d5c-a04ccfefa0ba", 00:20:57.783 "optimal_io_boundary": 0, 00:20:57.783 "md_size": 0, 00:20:57.783 "dif_type": 0, 00:20:57.783 "dif_is_head_of_md": false, 00:20:57.783 "dif_pi_format": 0 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "bdev_wait_for_examine" 00:20:57.783 } 00:20:57.783 ] 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "subsystem": "nbd", 00:20:57.783 "config": [] 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "subsystem": "scheduler", 00:20:57.783 "config": [ 00:20:57.783 { 00:20:57.783 "method": "framework_set_scheduler", 00:20:57.783 "params": { 00:20:57.783 "name": "static" 00:20:57.783 } 00:20:57.783 } 00:20:57.783 ] 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "subsystem": "nvmf", 00:20:57.783 "config": [ 00:20:57.783 { 00:20:57.783 "method": "nvmf_set_config", 00:20:57.783 "params": { 00:20:57.783 "discovery_filter": 
"match_any", 00:20:57.783 "admin_cmd_passthru": { 00:20:57.783 "identify_ctrlr": false 00:20:57.783 } 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "nvmf_set_max_subsystems", 00:20:57.783 "params": { 00:20:57.783 "max_subsystems": 1024 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "nvmf_set_crdt", 00:20:57.783 "params": { 00:20:57.783 "crdt1": 0, 00:20:57.783 "crdt2": 0, 00:20:57.783 "crdt3": 0 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "nvmf_create_transport", 00:20:57.783 "params": { 00:20:57.783 "trtype": "TCP", 00:20:57.783 "max_queue_depth": 128, 00:20:57.783 "max_io_qpairs_per_ctrlr": 127, 00:20:57.783 "in_capsule_data_size": 4096, 00:20:57.783 "max_io_size": 131072, 00:20:57.783 "io_unit_size": 131072, 00:20:57.783 "max_aq_depth": 128, 00:20:57.783 "num_shared_buffers": 511, 00:20:57.783 "buf_cache_size": 4294967295, 00:20:57.783 "dif_insert_or_strip": false, 00:20:57.783 "zcopy": false, 00:20:57.783 "c2h_success": false, 00:20:57.783 "sock_priority": 0, 00:20:57.783 "abort_timeout_sec": 1, 00:20:57.783 "ack_timeout": 0, 00:20:57.783 "data_wr_pool_size": 0 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "nvmf_create_subsystem", 00:20:57.783 "params": { 00:20:57.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.783 "allow_any_host": false, 00:20:57.783 "serial_number": "00000000000000000000", 00:20:57.783 "model_number": "SPDK bdev Controller", 00:20:57.783 "max_namespaces": 32, 00:20:57.783 "min_cntlid": 1, 00:20:57.783 "max_cntlid": 65519, 00:20:57.783 "ana_reporting": false 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "nvmf_subsystem_add_host", 00:20:57.783 "params": { 00:20:57.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.783 "host": "nqn.2016-06.io.spdk:host1", 00:20:57.783 "psk": "key0" 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "nvmf_subsystem_add_ns", 00:20:57.783 "params": { 00:20:57.783 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:57.783 "namespace": { 00:20:57.783 "nsid": 1, 00:20:57.783 "bdev_name": "malloc0", 00:20:57.783 "nguid": "B562B951757844718D5CA04CCFEFA0BA", 00:20:57.783 "uuid": "b562b951-7578-4471-8d5c-a04ccfefa0ba", 00:20:57.783 "no_auto_visible": false 00:20:57.783 } 00:20:57.783 } 00:20:57.783 }, 00:20:57.783 { 00:20:57.783 "method": "nvmf_subsystem_add_listener", 00:20:57.783 "params": { 00:20:57.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.783 "listen_address": { 00:20:57.783 "trtype": "TCP", 00:20:57.783 "adrfam": "IPv4", 00:20:57.783 "traddr": "10.0.0.2", 00:20:57.783 "trsvcid": "4420" 00:20:57.783 }, 00:20:57.783 "secure_channel": false, 00:20:57.783 "sock_impl": "ssl" 00:20:57.783 } 00:20:57.783 } 00:20:57.783 ] 00:20:57.783 } 00:20:57.783 ] 00:20:57.783 }' 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3716416 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3716416 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3716416 ']' 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.783 20:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.783 [2024-07-24 20:00:45.594720] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:57.783 [2024-07-24 20:00:45.594778] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.783 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.783 [2024-07-24 20:00:45.659464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.783 [2024-07-24 20:00:45.724844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.783 [2024-07-24 20:00:45.724883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.783 [2024-07-24 20:00:45.724891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.783 [2024-07-24 20:00:45.724902] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.783 [2024-07-24 20:00:45.724907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:57.783 [2024-07-24 20:00:45.724956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.045 [2024-07-24 20:00:45.922391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.045 [2024-07-24 20:00:45.964247] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:58.045 [2024-07-24 20:00:45.964447] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3716760 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3716760 /var/tmp/bdevperf.sock 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3716760 ']' 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:58.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.619 20:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:58.619 "subsystems": [ 00:20:58.619 { 00:20:58.619 "subsystem": "keyring", 00:20:58.619 "config": [ 00:20:58.619 { 00:20:58.619 "method": "keyring_file_add_key", 00:20:58.619 "params": { 00:20:58.619 "name": "key0", 00:20:58.619 "path": "/tmp/tmp.LBIMIM1KRA" 00:20:58.619 } 00:20:58.619 } 00:20:58.619 ] 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "subsystem": "iobuf", 00:20:58.619 "config": [ 00:20:58.619 { 00:20:58.619 "method": "iobuf_set_options", 00:20:58.619 "params": { 00:20:58.619 "small_pool_count": 8192, 00:20:58.619 "large_pool_count": 1024, 00:20:58.619 "small_bufsize": 8192, 00:20:58.619 "large_bufsize": 135168 00:20:58.619 } 00:20:58.619 } 00:20:58.619 ] 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "subsystem": "sock", 00:20:58.619 "config": [ 00:20:58.619 { 00:20:58.619 "method": "sock_set_default_impl", 00:20:58.619 "params": { 00:20:58.619 "impl_name": "posix" 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "sock_impl_set_options", 00:20:58.619 "params": { 00:20:58.619 "impl_name": "ssl", 00:20:58.619 "recv_buf_size": 4096, 00:20:58.619 "send_buf_size": 4096, 00:20:58.619 "enable_recv_pipe": true, 00:20:58.619 "enable_quickack": false, 00:20:58.619 "enable_placement_id": 0, 00:20:58.619 "enable_zerocopy_send_server": true, 00:20:58.619 "enable_zerocopy_send_client": false, 00:20:58.619 
"zerocopy_threshold": 0, 00:20:58.619 "tls_version": 0, 00:20:58.619 "enable_ktls": false 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "sock_impl_set_options", 00:20:58.619 "params": { 00:20:58.619 "impl_name": "posix", 00:20:58.619 "recv_buf_size": 2097152, 00:20:58.619 "send_buf_size": 2097152, 00:20:58.619 "enable_recv_pipe": true, 00:20:58.619 "enable_quickack": false, 00:20:58.619 "enable_placement_id": 0, 00:20:58.619 "enable_zerocopy_send_server": true, 00:20:58.619 "enable_zerocopy_send_client": false, 00:20:58.619 "zerocopy_threshold": 0, 00:20:58.619 "tls_version": 0, 00:20:58.619 "enable_ktls": false 00:20:58.619 } 00:20:58.619 } 00:20:58.619 ] 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "subsystem": "vmd", 00:20:58.619 "config": [] 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "subsystem": "accel", 00:20:58.619 "config": [ 00:20:58.619 { 00:20:58.619 "method": "accel_set_options", 00:20:58.619 "params": { 00:20:58.619 "small_cache_size": 128, 00:20:58.619 "large_cache_size": 16, 00:20:58.619 "task_count": 2048, 00:20:58.619 "sequence_count": 2048, 00:20:58.619 "buf_count": 2048 00:20:58.619 } 00:20:58.619 } 00:20:58.619 ] 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "subsystem": "bdev", 00:20:58.619 "config": [ 00:20:58.619 { 00:20:58.619 "method": "bdev_set_options", 00:20:58.619 "params": { 00:20:58.619 "bdev_io_pool_size": 65535, 00:20:58.619 "bdev_io_cache_size": 256, 00:20:58.619 "bdev_auto_examine": true, 00:20:58.619 "iobuf_small_cache_size": 128, 00:20:58.619 "iobuf_large_cache_size": 16 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "bdev_raid_set_options", 00:20:58.619 "params": { 00:20:58.619 "process_window_size_kb": 1024, 00:20:58.619 "process_max_bandwidth_mb_sec": 0 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "bdev_iscsi_set_options", 00:20:58.619 "params": { 00:20:58.619 "timeout_sec": 30 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": 
"bdev_nvme_set_options", 00:20:58.619 "params": { 00:20:58.619 "action_on_timeout": "none", 00:20:58.619 "timeout_us": 0, 00:20:58.619 "timeout_admin_us": 0, 00:20:58.619 "keep_alive_timeout_ms": 10000, 00:20:58.619 "arbitration_burst": 0, 00:20:58.619 "low_priority_weight": 0, 00:20:58.619 "medium_priority_weight": 0, 00:20:58.619 "high_priority_weight": 0, 00:20:58.619 "nvme_adminq_poll_period_us": 10000, 00:20:58.619 "nvme_ioq_poll_period_us": 0, 00:20:58.619 "io_queue_requests": 512, 00:20:58.619 "delay_cmd_submit": true, 00:20:58.619 "transport_retry_count": 4, 00:20:58.619 "bdev_retry_count": 3, 00:20:58.619 "transport_ack_timeout": 0, 00:20:58.619 "ctrlr_loss_timeout_sec": 0, 00:20:58.619 "reconnect_delay_sec": 0, 00:20:58.619 "fast_io_fail_timeout_sec": 0, 00:20:58.619 "disable_auto_failback": false, 00:20:58.619 "generate_uuids": false, 00:20:58.619 "transport_tos": 0, 00:20:58.619 "nvme_error_stat": false, 00:20:58.619 "rdma_srq_size": 0, 00:20:58.619 "io_path_stat": false, 00:20:58.619 "allow_accel_sequence": false, 00:20:58.619 "rdma_max_cq_size": 0, 00:20:58.619 "rdma_cm_event_timeout_ms": 0, 00:20:58.619 "dhchap_digests": [ 00:20:58.619 "sha256", 00:20:58.619 "sha384", 00:20:58.619 "sha512" 00:20:58.619 ], 00:20:58.619 "dhchap_dhgroups": [ 00:20:58.619 "null", 00:20:58.619 "ffdhe2048", 00:20:58.619 "ffdhe3072", 00:20:58.619 "ffdhe4096", 00:20:58.619 "ffdhe6144", 00:20:58.619 "ffdhe8192" 00:20:58.619 ] 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "bdev_nvme_attach_controller", 00:20:58.619 "params": { 00:20:58.619 "name": "nvme0", 00:20:58.619 "trtype": "TCP", 00:20:58.619 "adrfam": "IPv4", 00:20:58.619 "traddr": "10.0.0.2", 00:20:58.619 "trsvcid": "4420", 00:20:58.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.619 "prchk_reftag": false, 00:20:58.619 "prchk_guard": false, 00:20:58.619 "ctrlr_loss_timeout_sec": 0, 00:20:58.619 "reconnect_delay_sec": 0, 00:20:58.619 "fast_io_fail_timeout_sec": 0, 00:20:58.619 "psk": "key0", 
00:20:58.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.619 "hdgst": false, 00:20:58.619 "ddgst": false 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "bdev_nvme_set_hotplug", 00:20:58.619 "params": { 00:20:58.619 "period_us": 100000, 00:20:58.619 "enable": false 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "bdev_enable_histogram", 00:20:58.619 "params": { 00:20:58.619 "name": "nvme0n1", 00:20:58.619 "enable": true 00:20:58.619 } 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "method": "bdev_wait_for_examine" 00:20:58.619 } 00:20:58.619 ] 00:20:58.619 }, 00:20:58.619 { 00:20:58.619 "subsystem": "nbd", 00:20:58.619 "config": [] 00:20:58.619 } 00:20:58.619 ] 00:20:58.619 }' 00:20:58.619 [2024-07-24 20:00:46.448048] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:20:58.619 [2024-07-24 20:00:46.448100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716760 ] 00:20:58.619 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.619 [2024-07-24 20:00:46.501419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.619 [2024-07-24 20:00:46.555008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.878 [2024-07-24 20:00:46.688535] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.448 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.448 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:59.448 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.448 
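The JSON dump above is the entire bdevperf configuration, echoed into the process via `-c /dev/fd/63` (process substitution) rather than a file on disk. A minimal standalone sketch of that pattern — building a small keyring config in a heredoc and validating it before it would be handed to bdevperf. The PSK path and key name here are illustrative placeholders, not the test's real values, and the bdevperf invocation is shown only as a comment:

```shell
#!/usr/bin/env bash
# Sketch: assemble the kind of JSON config the test pipes into bdevperf
# through -c /dev/fd/63. Path and key name are hypothetical.
set -euo pipefail

psk_path=/tmp/tls_psk.key   # hypothetical PSK file location

config=$(cat <<JSON
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "$psk_path" } }
      ]
    }
  ]
}
JSON
)

# Validate the JSON before passing it along, mirroring how the trace
# echoes the config straight into bdevperf's config file descriptor.
echo "$config" | python3 -m json.tool > /dev/null && echo "config OK"

# The real invocation in the trace looks roughly like (not run here):
#   bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
#     -c <(echo "$config")
```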
20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:59.448 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.448 20:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:59.706 Running I/O for 1 seconds... 00:21:00.642 00:21:00.642 Latency(us) 00:21:00.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.642 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:00.642 Verification LBA range: start 0x0 length 0x2000 00:21:00.642 nvme0n1 : 1.05 2016.17 7.88 0.00 0.00 62224.11 5789.01 87381.33 00:21:00.642 =================================================================================================================== 00:21:00.642 Total : 2016.17 7.88 0.00 0.00 62224.11 5789.01 87381.33 00:21:00.642 0 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:00.642 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:00.642 nvmf_trace.0 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3716760 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3716760 ']' 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3716760 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3716760 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3716760' 00:21:00.902 killing process with pid 3716760 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3716760 00:21:00.902 Received shutdown signal, test time was about 1.000000 seconds 00:21:00.902 00:21:00.902 Latency(us) 00:21:00.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.902 
=================================================================================================================== 00:21:00.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3716760 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.902 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.902 rmmod nvme_tcp 00:21:00.902 rmmod nvme_fabrics 00:21:00.902 rmmod nvme_keyring 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3716416 ']' 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3716416 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3716416 ']' 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3716416 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3716416 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3716416' 00:21:01.162 killing process with pid 3716416 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3716416 00:21:01.162 20:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3716416 00:21:01.162 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.162 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.162 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.162 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.163 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.163 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.163 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.163 20:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.v14o9CVlcL /tmp/tmp.22PpUHi965 /tmp/tmp.LBIMIM1KRA 00:21:03.710 00:21:03.710 real 1m24.087s 
00:21:03.710 user 2m7.069s 00:21:03.710 sys 0m29.261s 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.710 ************************************ 00:21:03.710 END TEST nvmf_tls 00:21:03.710 ************************************ 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:03.710 ************************************ 00:21:03.710 START TEST nvmf_fips 00:21:03.710 ************************************ 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:03.710 * Looking for test storage... 
00:21:03.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:03.710 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
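The xtrace run above is `scripts/common.sh`'s `cmp_versions` expanding `ge 3.0.9 3.0.0` one dotted component at a time: split both versions on `.`, then walk the components, deciding at the first inequality. A condensed standalone sketch of that comparison (my own reimplementation for illustration, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Standalone sketch of the digit-by-digit version comparison the trace
# walks through: returns 0 (true) when ver1 >= ver2, like "ge" in
# scripts/common.sh. Missing components default to 0.
version_ge() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 0   # first larger component decides
        (( a < b )) && return 1   # first smaller component decides
    done
    return 0                      # all components equal
}

version_ge 3.0.9 3.0.0 && echo "3.0.9 >= 3.0.0"
version_ge 2.9.9 3.0.0 || echo "2.9.9 <  3.0.0"
```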
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:03.711 Error setting digest 00:21:03.711 00C2C0E0087F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:03.711 00C2C0E0087F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.711 20:00:51 
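The `Error setting digest` output above is the expected behavior: with `OPENSSL_CONF` pointing at the generated FIPS config, MD5 is fetched from the FIPS provider and rejected, and `fips.sh` treats that failure as proof the provider is active. A small sketch of the same probe — note it keys off `openssl md5`'s exit status, not its output, and on a non-FIPS machine it will report MD5 as available:

```shell
#!/usr/bin/env bash
# Probe whether MD5 is usable under the current OpenSSL provider set.
# In the trace, fips.sh expects this to FAIL (FIPS provider loaded);
# on an ordinary system it will succeed instead.
if echo test | openssl md5 > /dev/null 2>&1; then
    echo "md5 available (FIPS provider not enforcing)"
else
    echo "md5 blocked (FIPS mode active)"
fi
```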
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.711 20:00:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.938 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.939 20:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:11.939 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:11.939 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.939 20:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:11.939 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.939 
20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:11.939 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:21:11.939 00:21:11.939 --- 10.0.0.2 ping statistics --- 00:21:11.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.939 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:21:11.939 00:21:11.939 --- 10.0.0.1 ping statistics --- 00:21:11.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.939 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3721475 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3721475 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3721475 ']' 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.939 20:00:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:11.939 [2024-07-24 20:00:58.875357] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:21:11.939 [2024-07-24 20:00:58.875421] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.939 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.939 [2024-07-24 20:00:58.961702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.939 [2024-07-24 20:00:59.053326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.939 [2024-07-24 20:00:59.053389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.939 [2024-07-24 20:00:59.053397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.939 [2024-07-24 20:00:59.053404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.939 [2024-07-24 20:00:59.053411] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:11.939 [2024-07-24 20:00:59.053445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.939 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.939 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:11.940 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:11.940 [2024-07-24 20:00:59.828973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.940 [2024-07-24 20:00:59.844979] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.940 [2024-07-24 20:00:59.845293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.940 [2024-07-24 20:00:59.875220] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.940 malloc0 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3721677 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3721677 /var/tmp/bdevperf.sock 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3721677 ']' 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.204 20:00:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:12.204 [2024-07-24 20:00:59.985397] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:21:12.204 [2024-07-24 20:00:59.985473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721677 ] 00:21:12.204 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.204 [2024-07-24 20:01:00.045082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.204 [2024-07-24 20:01:00.114715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.144 20:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.144 20:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:13.144 20:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:13.144 [2024-07-24 20:01:00.872610] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.144 [2024-07-24 20:01:00.872678] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:13.144 TLSTESTn1 00:21:13.144 20:01:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.144 Running I/O for 10 seconds... 00:21:25.384 00:21:25.384 Latency(us) 00:21:25.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.384 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:25.384 Verification LBA range: start 0x0 length 0x2000 00:21:25.384 TLSTESTn1 : 10.08 2259.29 8.83 0.00 0.00 56445.63 4833.28 107479.04 00:21:25.384 =================================================================================================================== 00:21:25.384 Total : 2259.29 8.83 0.00 0.00 56445.63 4833.28 107479.04 00:21:25.384 0 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:25.384 nvmf_trace.0 
00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3721677 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3721677 ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3721677 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3721677 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3721677' 00:21:25.384 killing process with pid 3721677 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3721677 00:21:25.384 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.384 00:21:25.384 Latency(us) 00:21:25.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.384 =================================================================================================================== 00:21:25.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.384 [2024-07-24 20:01:11.318629] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
3721677 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.384 rmmod nvme_tcp 00:21:25.384 rmmod nvme_fabrics 00:21:25.384 rmmod nvme_keyring 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3721475 ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3721475 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3721475 ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3721475 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3721475 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3721475' 00:21:25.384 killing process with pid 3721475 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3721475 00:21:25.384 [2024-07-24 20:01:11.560898] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3721475 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.384 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.385 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.385 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.385 20:01:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:25.954 00:21:25.954 real 0m22.536s 00:21:25.954 user 0m22.910s 00:21:25.954 sys 
0m10.284s 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:25.954 ************************************ 00:21:25.954 END TEST nvmf_fips 00:21:25.954 ************************************ 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:21:25.954 20:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:21:25.955 20:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.955 20:01:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local 
-ga e810 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.541 20:01:20 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:32.541 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:32.541 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:32.541 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:32.541 
Found net devices under 0000:4b:00.1: cvl_0_1 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:32.541 ************************************ 00:21:32.541 START TEST nvmf_perf_adq 00:21:32.541 ************************************ 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:32.541 * Looking for test storage... 
00:21:32.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.541 20:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.541 20:01:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:40.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:40.683 20:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:40.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:40.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:40.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:40.683 20:01:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:40.945 20:01:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:42.859 20:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:48.233 
20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:48.233 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:48.233 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.233 20:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:48.233 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.233 20:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:48.233 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:48.233 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.234 
20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:48.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:21:48.234 00:21:48.234 --- 10.0.0.2 ping statistics --- 00:21:48.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.234 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:21:48.234 00:21:48.234 --- 10.0.0.1 ping statistics --- 00:21:48.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.234 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3733387 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3733387 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3733387 ']' 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.234 20:01:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.234 [2024-07-24 20:01:35.999745] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:21:48.234 [2024-07-24 20:01:35.999834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.234 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.234 [2024-07-24 20:01:36.074523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.234 [2024-07-24 20:01:36.150709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.234 [2024-07-24 20:01:36.150749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.234 [2024-07-24 20:01:36.150757] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.234 [2024-07-24 20:01:36.150764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.234 [2024-07-24 20:01:36.150769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
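The `nvmf_tcp_init` sequence logged earlier in this run (namespace creation, address assignment, the iptables rule for port 4420, and the ping checks) can be sketched as an unprivileged dry run. This is a hypothetical sketch, not the test's own helper: `run` records and prints each command instead of executing it; set `run() { "$@"; }` and run as root to apply the setup for real.

```shell
# Unprivileged dry-run sketch of the nvmf_tcp_init sequence above.
# run() records and prints each command instead of executing it.
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0            # NIC handed to the target namespace
initiator_if=cvl_0_1         # NIC left in the default namespace

cmds=()
run() { cmds+=("$*"); echo "$*"; }

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"                          # isolate the target NIC
run ip addr add 10.0.0.1/24 dev "$initiator_if"                   # initiator address
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target address
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                            # initiator -> target check
```

Putting the target NIC in its own namespace lets target and initiator exchange traffic over real hardware on a single host, which is why both ping directions are verified before the target starts.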
00:21:48.234 [2024-07-24 20:01:36.150906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.234 [2024-07-24 20:01:36.151040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.234 [2024-07-24 20:01:36.151244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.234 [2024-07-24 20:01:36.151257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:49.181 20:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 [2024-07-24 20:01:36.961517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 Malloc1 00:21:49.181 20:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 20:01:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.181 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:49.181 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.181 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.181 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.182 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.182 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.182 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.182 [2024-07-24 20:01:37.020871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.182 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.182 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3733671 00:21:49.182 20:01:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:49.182 20:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:49.182 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.097 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:51.097 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.097 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.358 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.358 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:51.358 "tick_rate": 2400000000, 00:21:51.358 "poll_groups": [ 00:21:51.358 { 00:21:51.358 "name": "nvmf_tgt_poll_group_000", 00:21:51.358 "admin_qpairs": 1, 00:21:51.359 "io_qpairs": 1, 00:21:51.359 "current_admin_qpairs": 1, 00:21:51.359 "current_io_qpairs": 1, 00:21:51.359 "pending_bdev_io": 0, 00:21:51.359 "completed_nvme_io": 17800, 00:21:51.359 "transports": [ 00:21:51.359 { 00:21:51.359 "trtype": "TCP" 00:21:51.359 } 00:21:51.359 ] 00:21:51.359 }, 00:21:51.359 { 00:21:51.359 "name": "nvmf_tgt_poll_group_001", 00:21:51.359 "admin_qpairs": 0, 00:21:51.359 "io_qpairs": 1, 00:21:51.359 "current_admin_qpairs": 0, 00:21:51.359 "current_io_qpairs": 1, 00:21:51.359 "pending_bdev_io": 0, 00:21:51.359 "completed_nvme_io": 28459, 00:21:51.359 "transports": [ 00:21:51.359 { 00:21:51.359 "trtype": "TCP" 00:21:51.359 } 00:21:51.359 ] 00:21:51.359 }, 00:21:51.359 { 00:21:51.359 "name": "nvmf_tgt_poll_group_002", 00:21:51.359 "admin_qpairs": 0, 00:21:51.359 "io_qpairs": 1, 00:21:51.359 "current_admin_qpairs": 0, 00:21:51.359 "current_io_qpairs": 1, 00:21:51.359 "pending_bdev_io": 0, 
00:21:51.359 "completed_nvme_io": 19626, 00:21:51.359 "transports": [ 00:21:51.359 { 00:21:51.359 "trtype": "TCP" 00:21:51.359 } 00:21:51.359 ] 00:21:51.359 }, 00:21:51.359 { 00:21:51.359 "name": "nvmf_tgt_poll_group_003", 00:21:51.359 "admin_qpairs": 0, 00:21:51.359 "io_qpairs": 1, 00:21:51.359 "current_admin_qpairs": 0, 00:21:51.359 "current_io_qpairs": 1, 00:21:51.359 "pending_bdev_io": 0, 00:21:51.359 "completed_nvme_io": 19299, 00:21:51.359 "transports": [ 00:21:51.359 { 00:21:51.359 "trtype": "TCP" 00:21:51.359 } 00:21:51.359 ] 00:21:51.359 } 00:21:51.359 ] 00:21:51.359 }' 00:21:51.359 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:51.359 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:51.359 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:51.359 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:51.359 20:01:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3733671 00:21:59.506 Initializing NVMe Controllers 00:21:59.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:59.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:59.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:59.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:59.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:59.506 Initialization complete. Launching workers. 
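The `nvmf_get_stats` check above passes when all four poll groups carry exactly one active I/O qpair, i.e. ADQ spread the connections evenly across cores. A jq-free stand-in for that count, over a trimmed, hypothetical stats document (the real check pipes the RPC output through `jq ... | wc -l`):

```shell
# Count poll groups with one active I/O qpair, as the jq | wc -l check
# above does.  The JSON lines here are a trimmed, hypothetical stats
# document; grep -c stands in for the jq filter so the sketch has no
# external dependencies.
stats='
{ "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 }
{ "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 }
{ "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 }
{ "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }'

count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "$count"
if [ "$count" -ne 4 ]; then
    echo "expected one active I/O qpair per poll group"
fi
```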
00:21:59.506 ======================================================== 00:21:59.506 Latency(us) 00:21:59.506 Device Information : IOPS MiB/s Average min max 00:21:59.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11154.40 43.57 5737.84 1349.26 9351.83 00:21:59.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15543.20 60.72 4117.47 1627.79 8837.49 00:21:59.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14016.30 54.75 4565.67 1615.78 10096.69 00:21:59.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12491.30 48.79 5123.26 1427.47 10301.42 00:21:59.506 ======================================================== 00:21:59.506 Total : 53205.19 207.83 4811.39 1349.26 10301.42 00:21:59.506 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.506 rmmod nvme_tcp 00:21:59.506 rmmod nvme_fabrics 00:21:59.506 rmmod nvme_keyring 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:59.506 20:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3733387 ']' 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3733387 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3733387 ']' 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3733387 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3733387 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3733387' 00:21:59.506 killing process with pid 3733387 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3733387 00:21:59.506 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3733387 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.768 20:01:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.681 20:01:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.681 20:01:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:01.681 20:01:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:03.592 20:01:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:05.502 20:01:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:10.790 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:10.790 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:10.790 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.790 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:10.790 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
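The teardown above retries the module unloads under `set +e` (a controller may still be disconnecting when cleanup starts), and `adq_reload_driver` then cycles the ice driver. A hypothetical, unprivileged sketch of that retry pattern, with `flaky_unload` standing in for `modprobe -r nvme-tcp`:

```shell
# Sketch of the nvmfcleanup retry loop above: module unload may fail while
# a controller is still tearing down, so failures are tolerated and retried.
# flaky_unload is a stand-in for `modprobe -r nvme-tcp` that fails twice
# before succeeding, so the pattern runs unprivileged.
attempts=0
flaky_unload() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

for i in $(seq 1 20); do
    set +e                  # a failed unload must not abort the run
    flaky_unload
    ok=$?
    set -e
    if [ "$ok" -eq 0 ]; then
        break
    fi
done
echo "unloaded after $attempts attempts"

# adq_reload_driver then cycles the NIC driver (root-only, shown dry-run):
echo "rmmod ice && modprobe ice && sleep 5"
```

Reloading ice between runs gives the next `nvmftestinit` a clean ADQ queue configuration, which is why the script sleeps afterwards waiting for the ports to re-register.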
00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.791 20:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:10.791 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:10.791 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:10.791 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:10.791 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.791 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:10.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:10.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:22:10.791 00:22:10.791 --- 10.0.0.2 ping statistics --- 00:22:10.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.791 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:22:10.792 00:22:10.792 --- 10.0.0.1 ping statistics --- 00:22:10.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.792 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:10.792 net.core.busy_poll = 1 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:10.792 net.core.busy_read = 1 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:10.792 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3738227 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3738227 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3738227 ']' 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.053 20:01:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.053 [2024-07-24 20:01:58.888102] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:22:11.053 [2024-07-24 20:01:58.888158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.053 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.053 [2024-07-24 20:01:58.954867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.314 [2024-07-24 20:01:59.020242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.315 [2024-07-24 20:01:59.020282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.315 [2024-07-24 20:01:59.020290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.315 [2024-07-24 20:01:59.020296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.315 [2024-07-24 20:01:59.020302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:11.315 [2024-07-24 20:01:59.024221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.315 [2024-07-24 20:01:59.024382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.315 [2024-07-24 20:01:59.024620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.315 [2024-07-24 20:01:59.024621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.925 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.925 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:11.925 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.925 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:11.926 20:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.926 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 [2024-07-24 20:01:59.890546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 Malloc1 00:22:12.190 20:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.190 [2024-07-24 20:01:59.950001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3738578 00:22:12.190 20:01:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:12.190 20:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:12.190 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.105 20:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:14.105 20:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.105 20:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.105 20:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.105 20:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:14.105 "tick_rate": 2400000000, 00:22:14.105 "poll_groups": [ 00:22:14.105 { 00:22:14.105 "name": "nvmf_tgt_poll_group_000", 00:22:14.105 "admin_qpairs": 1, 00:22:14.105 "io_qpairs": 2, 00:22:14.105 "current_admin_qpairs": 1, 00:22:14.105 "current_io_qpairs": 2, 00:22:14.105 "pending_bdev_io": 0, 00:22:14.105 "completed_nvme_io": 31459, 00:22:14.105 "transports": [ 00:22:14.105 { 00:22:14.105 "trtype": "TCP" 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "name": "nvmf_tgt_poll_group_001", 00:22:14.105 "admin_qpairs": 0, 00:22:14.105 "io_qpairs": 2, 00:22:14.105 "current_admin_qpairs": 0, 00:22:14.105 "current_io_qpairs": 2, 00:22:14.105 "pending_bdev_io": 0, 00:22:14.105 "completed_nvme_io": 41174, 00:22:14.105 "transports": [ 00:22:14.105 { 00:22:14.105 "trtype": "TCP" 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "name": "nvmf_tgt_poll_group_002", 00:22:14.105 "admin_qpairs": 0, 00:22:14.105 "io_qpairs": 0, 00:22:14.105 "current_admin_qpairs": 0, 00:22:14.105 "current_io_qpairs": 0, 00:22:14.105 "pending_bdev_io": 0, 
00:22:14.105 "completed_nvme_io": 0, 00:22:14.105 "transports": [ 00:22:14.105 { 00:22:14.105 "trtype": "TCP" 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "name": "nvmf_tgt_poll_group_003", 00:22:14.105 "admin_qpairs": 0, 00:22:14.105 "io_qpairs": 0, 00:22:14.105 "current_admin_qpairs": 0, 00:22:14.105 "current_io_qpairs": 0, 00:22:14.105 "pending_bdev_io": 0, 00:22:14.105 "completed_nvme_io": 0, 00:22:14.105 "transports": [ 00:22:14.105 { 00:22:14.105 "trtype": "TCP" 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }' 00:22:14.105 20:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:14.105 20:02:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:14.105 20:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:14.105 20:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:14.106 20:02:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3738578 00:22:22.246 Initializing NVMe Controllers 00:22:22.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:22.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:22.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:22.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:22.246 Initialization complete. Launching workers. 
00:22:22.246 ======================================================== 00:22:22.246 Latency(us) 00:22:22.246 Device Information : IOPS MiB/s Average min max 00:22:22.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9921.70 38.76 6451.50 1209.73 51833.14 00:22:22.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9774.00 38.18 6548.82 1450.58 49874.78 00:22:22.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10454.60 40.84 6122.57 1220.52 49699.41 00:22:22.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10621.70 41.49 6044.08 1001.23 52459.46 00:22:22.246 ======================================================== 00:22:22.246 Total : 40771.99 159.27 6284.35 1001.23 52459.46 00:22:22.246 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.246 rmmod nvme_tcp 00:22:22.246 rmmod nvme_fabrics 00:22:22.246 rmmod nvme_keyring 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:22.246 20:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3738227 ']' 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3738227 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3738227 ']' 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3738227 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.246 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3738227 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3738227' 00:22:22.507 killing process with pid 3738227 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3738227 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3738227 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.507 20:02:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:25.808 00:22:25.808 real 0m53.207s 00:22:25.808 user 2m47.652s 00:22:25.808 sys 0m11.813s 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.808 ************************************ 00:22:25.808 END TEST nvmf_perf_adq 00:22:25.808 ************************************ 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:25.808 ************************************ 00:22:25.808 START TEST nvmf_shutdown 00:22:25.808 ************************************ 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:25.808 * Looking for test storage... 
00:22:25.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.808 20:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:25.808 20:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.808 ************************************ 00:22:25.808 START TEST nvmf_shutdown_tc1 00:22:25.808 ************************************ 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.808 20:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.808 20:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.014 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:34.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.015 20:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:34.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:34.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:34.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.015 20:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.015 20:02:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:22:34.015 00:22:34.015 --- 10.0.0.2 ping statistics --- 00:22:34.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.015 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:22:34.015 00:22:34.015 --- 10.0.0.1 ping statistics --- 00:22:34.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.015 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.015 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.015 
20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3745035 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3745035 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3745035 ']' 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.016 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.016 [2024-07-24 20:02:21.199710] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:22:34.016 [2024-07-24 20:02:21.199775] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.016 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.016 [2024-07-24 20:02:21.288323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.016 [2024-07-24 20:02:21.383026] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.016 [2024-07-24 20:02:21.383085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.016 [2024-07-24 20:02:21.383093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.016 [2024-07-24 20:02:21.383100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.016 [2024-07-24 20:02:21.383106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:34.016 [2024-07-24 20:02:21.383270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.016 [2024-07-24 20:02:21.383486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.016 [2024-07-24 20:02:21.383652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.016 [2024-07-24 20:02:21.383653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.277 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.277 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:34.277 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.277 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.277 20:02:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.277 [2024-07-24 20:02:22.035112] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.277 20:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.277 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.278 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.278 Malloc1 00:22:34.278 [2024-07-24 20:02:22.138516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.278 Malloc2 00:22:34.278 Malloc3 00:22:34.539 Malloc4 00:22:34.539 Malloc5 00:22:34.539 Malloc6 00:22:34.539 Malloc7 00:22:34.539 Malloc8 00:22:34.539 Malloc9 
00:22:34.539 Malloc10 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3745421 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3745421 /var/tmp/bdevperf.sock 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3745421 ']' 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.801 { 00:22:34.801 "params": { 00:22:34.801 "name": "Nvme$subsystem", 00:22:34.801 "trtype": "$TEST_TRANSPORT", 00:22:34.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.801 "adrfam": "ipv4", 00:22:34.801 "trsvcid": "$NVMF_PORT", 00:22:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.801 "hdgst": ${hdgst:-false}, 00:22:34.801 "ddgst": ${ddgst:-false} 00:22:34.801 }, 00:22:34.801 "method": "bdev_nvme_attach_controller" 00:22:34.801 } 00:22:34.801 EOF 00:22:34.801 )") 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.801 20:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.801 { 00:22:34.801 "params": { 00:22:34.801 "name": "Nvme$subsystem", 00:22:34.801 "trtype": "$TEST_TRANSPORT", 00:22:34.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.801 "adrfam": "ipv4", 00:22:34.801 "trsvcid": "$NVMF_PORT", 00:22:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.801 "hdgst": ${hdgst:-false}, 00:22:34.801 "ddgst": ${ddgst:-false} 00:22:34.801 }, 00:22:34.801 "method": "bdev_nvme_attach_controller" 00:22:34.801 } 00:22:34.801 EOF 00:22:34.801 )") 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.801 { 00:22:34.801 "params": { 00:22:34.801 "name": "Nvme$subsystem", 00:22:34.801 "trtype": "$TEST_TRANSPORT", 00:22:34.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.801 "adrfam": "ipv4", 00:22:34.801 "trsvcid": "$NVMF_PORT", 00:22:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.801 "hdgst": ${hdgst:-false}, 00:22:34.801 "ddgst": ${ddgst:-false} 00:22:34.801 }, 00:22:34.801 "method": "bdev_nvme_attach_controller" 00:22:34.801 } 00:22:34.801 EOF 00:22:34.801 )") 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.801 { 
00:22:34.801 "params": { 00:22:34.801 "name": "Nvme$subsystem", 00:22:34.801 "trtype": "$TEST_TRANSPORT", 00:22:34.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.801 "adrfam": "ipv4", 00:22:34.801 "trsvcid": "$NVMF_PORT", 00:22:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.801 "hdgst": ${hdgst:-false}, 00:22:34.801 "ddgst": ${ddgst:-false} 00:22:34.801 }, 00:22:34.801 "method": "bdev_nvme_attach_controller" 00:22:34.801 } 00:22:34.801 EOF 00:22:34.801 )") 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.801 { 00:22:34.801 "params": { 00:22:34.801 "name": "Nvme$subsystem", 00:22:34.801 "trtype": "$TEST_TRANSPORT", 00:22:34.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.801 "adrfam": "ipv4", 00:22:34.801 "trsvcid": "$NVMF_PORT", 00:22:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.801 "hdgst": ${hdgst:-false}, 00:22:34.801 "ddgst": ${ddgst:-false} 00:22:34.801 }, 00:22:34.801 "method": "bdev_nvme_attach_controller" 00:22:34.801 } 00:22:34.801 EOF 00:22:34.801 )") 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.801 { 00:22:34.801 "params": { 00:22:34.801 "name": "Nvme$subsystem", 00:22:34.801 "trtype": "$TEST_TRANSPORT", 00:22:34.801 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:34.801 "adrfam": "ipv4", 00:22:34.801 "trsvcid": "$NVMF_PORT", 00:22:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.801 "hdgst": ${hdgst:-false}, 00:22:34.801 "ddgst": ${ddgst:-false} 00:22:34.801 }, 00:22:34.801 "method": "bdev_nvme_attach_controller" 00:22:34.801 } 00:22:34.801 EOF 00:22:34.801 )") 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.801 [2024-07-24 20:02:22.591272] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:22:34.801 [2024-07-24 20:02:22.591324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.801 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.801 { 00:22:34.801 "params": { 00:22:34.801 "name": "Nvme$subsystem", 00:22:34.801 "trtype": "$TEST_TRANSPORT", 00:22:34.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.801 "adrfam": "ipv4", 00:22:34.801 "trsvcid": "$NVMF_PORT", 00:22:34.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.802 "hdgst": ${hdgst:-false}, 00:22:34.802 "ddgst": ${ddgst:-false} 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 } 00:22:34.802 EOF 00:22:34.802 )") 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.802 { 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme$subsystem", 00:22:34.802 "trtype": "$TEST_TRANSPORT", 00:22:34.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "$NVMF_PORT", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.802 "hdgst": ${hdgst:-false}, 00:22:34.802 "ddgst": ${ddgst:-false} 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 } 00:22:34.802 EOF 00:22:34.802 )") 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.802 { 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme$subsystem", 00:22:34.802 "trtype": "$TEST_TRANSPORT", 00:22:34.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "$NVMF_PORT", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.802 "hdgst": ${hdgst:-false}, 00:22:34.802 "ddgst": ${ddgst:-false} 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 } 00:22:34.802 EOF 00:22:34.802 )") 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.802 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:34.802 20:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:34.802 { 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme$subsystem", 00:22:34.802 "trtype": "$TEST_TRANSPORT", 00:22:34.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "$NVMF_PORT", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:34.802 "hdgst": ${hdgst:-false}, 00:22:34.802 "ddgst": ${ddgst:-false} 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 } 00:22:34.802 EOF 00:22:34.802 )") 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:34.802 20:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme1", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme2", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 
00:22:34.802 "params": { 00:22:34.802 "name": "Nvme3", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme4", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme5", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme6", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme7", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:34.802 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme8", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme9", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 },{ 00:22:34.802 "params": { 00:22:34.802 "name": "Nvme10", 00:22:34.802 "trtype": "tcp", 00:22:34.802 "traddr": "10.0.0.2", 00:22:34.802 "adrfam": "ipv4", 00:22:34.802 "trsvcid": "4420", 00:22:34.802 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:34.802 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:34.802 "hdgst": false, 00:22:34.802 "ddgst": false 00:22:34.802 }, 00:22:34.802 "method": "bdev_nvme_attach_controller" 00:22:34.802 }' 00:22:34.802 [2024-07-24 20:02:22.651639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.802 [2024-07-24 20:02:22.716618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:36.186 20:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3745421 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:36.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3745421 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:36.186 20:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:37.128 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3745035 00:22:37.128 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:37.128 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:37.128 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:37.128 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:37.128 20:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.128 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.128 { 00:22:37.128 "params": { 00:22:37.128 "name": "Nvme$subsystem", 00:22:37.128 "trtype": "$TEST_TRANSPORT", 00:22:37.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.128 "adrfam": "ipv4", 00:22:37.128 "trsvcid": "$NVMF_PORT", 00:22:37.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.128 "hdgst": ${hdgst:-false}, 00:22:37.128 "ddgst": ${ddgst:-false} 00:22:37.128 }, 00:22:37.128 "method": "bdev_nvme_attach_controller" 00:22:37.128 } 00:22:37.128 EOF 00:22:37.128 )") 00:22:37.128 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.389 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.389 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.389 { 00:22:37.389 "params": { 00:22:37.389 "name": "Nvme$subsystem", 00:22:37.389 "trtype": "$TEST_TRANSPORT", 00:22:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.389 "adrfam": "ipv4", 00:22:37.389 "trsvcid": "$NVMF_PORT", 00:22:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.389 "hdgst": ${hdgst:-false}, 00:22:37.389 "ddgst": ${ddgst:-false} 00:22:37.389 }, 00:22:37.389 "method": "bdev_nvme_attach_controller" 00:22:37.389 } 00:22:37.389 EOF 00:22:37.389 )") 00:22:37.389 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.389 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.389 
20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.389 { 00:22:37.389 "params": { 00:22:37.389 "name": "Nvme$subsystem", 00:22:37.389 "trtype": "$TEST_TRANSPORT", 00:22:37.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.389 "adrfam": "ipv4", 00:22:37.389 "trsvcid": "$NVMF_PORT", 00:22:37.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.390 { 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme$subsystem", 00:22:37.390 "trtype": "$TEST_TRANSPORT", 00:22:37.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "$NVMF_PORT", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:22:37.390 { 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme$subsystem", 00:22:37.390 "trtype": "$TEST_TRANSPORT", 00:22:37.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "$NVMF_PORT", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.390 { 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme$subsystem", 00:22:37.390 "trtype": "$TEST_TRANSPORT", 00:22:37.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "$NVMF_PORT", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 [2024-07-24 20:02:25.123389] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:22:37.390 [2024-07-24 20:02:25.123447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745792 ] 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.390 { 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme$subsystem", 00:22:37.390 "trtype": "$TEST_TRANSPORT", 00:22:37.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "$NVMF_PORT", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.390 { 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme$subsystem", 00:22:37.390 "trtype": "$TEST_TRANSPORT", 00:22:37.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "$NVMF_PORT", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": 
"bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.390 { 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme$subsystem", 00:22:37.390 "trtype": "$TEST_TRANSPORT", 00:22:37.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "$NVMF_PORT", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.390 { 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme$subsystem", 00:22:37.390 "trtype": "$TEST_TRANSPORT", 00:22:37.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "$NVMF_PORT", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.390 "hdgst": ${hdgst:-false}, 00:22:37.390 "ddgst": ${ddgst:-false} 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 } 00:22:37.390 EOF 00:22:37.390 )") 00:22:37.390 20:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:37.390 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:37.390 20:02:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme1", 00:22:37.390 "trtype": "tcp", 00:22:37.390 "traddr": "10.0.0.2", 00:22:37.390 "adrfam": "ipv4", 00:22:37.390 "trsvcid": "4420", 00:22:37.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.390 "hdgst": false, 00:22:37.390 "ddgst": false 00:22:37.390 }, 00:22:37.390 "method": "bdev_nvme_attach_controller" 00:22:37.390 },{ 00:22:37.390 "params": { 00:22:37.390 "name": "Nvme2", 00:22:37.390 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme3", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme4", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 
"subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme5", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme6", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme7", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme8", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 
00:22:37.391 "name": "Nvme9", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 },{ 00:22:37.391 "params": { 00:22:37.391 "name": "Nvme10", 00:22:37.391 "trtype": "tcp", 00:22:37.391 "traddr": "10.0.0.2", 00:22:37.391 "adrfam": "ipv4", 00:22:37.391 "trsvcid": "4420", 00:22:37.391 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:37.391 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:37.391 "hdgst": false, 00:22:37.391 "ddgst": false 00:22:37.391 }, 00:22:37.391 "method": "bdev_nvme_attach_controller" 00:22:37.391 }' 00:22:37.391 [2024-07-24 20:02:25.184660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.391 [2024-07-24 20:02:25.249815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.781 Running I/O for 1 seconds... 
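For context, the repeated `config+=("$(cat <<-EOF ...)")` expansions traced above are one bash pattern: a loop that emits one heredoc JSON fragment per subsystem id, then comma-joins the fragments via `IFS=,` before handing them to the target over `--json /dev/fd/...`. A minimal sketch of that pattern (not the actual `nvmf/common.sh` source; the transport/address values are hard-coded stand-ins for `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the xtrace above:
# one JSON fragment per subsystem id, comma-joined the same way the
# log's IFS=, / printf '%s\n' step joins them.
gen_config_sketch() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_config_sketch 1 2 3
```

The real script additionally pipes the joined fragments through `jq .` to validate and pretty-print them, which is why the log shows a `jq .` step before the final `printf`.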
00:22:40.169 00:22:40.169 Latency(us) 00:22:40.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme1n1 : 1.09 234.10 14.63 0.00 0.00 265376.85 23374.51 251658.24 00:22:40.169 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme2n1 : 1.07 179.92 11.25 0.00 0.00 345790.29 26432.85 272629.76 00:22:40.169 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme3n1 : 1.15 222.56 13.91 0.00 0.00 275232.85 42161.49 249910.61 00:22:40.169 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme4n1 : 1.16 220.71 13.79 0.00 0.00 272878.08 22609.92 272629.76 00:22:40.169 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme5n1 : 1.11 288.42 18.03 0.00 0.00 204246.19 22937.60 223696.21 00:22:40.169 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme6n1 : 1.17 274.62 17.16 0.00 0.00 211695.27 22500.69 249910.61 00:22:40.169 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme7n1 : 1.16 221.46 13.84 0.00 0.00 257631.57 24794.45 269134.51 00:22:40.169 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme8n1 : 1.15 223.34 13.96 0.00 0.00 250396.16 23265.28 256901.12 00:22:40.169 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme9n1 : 1.19 269.76 16.86 0.00 0.00 204580.18 17585.49 246415.36 00:22:40.169 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:40.169 Verification LBA range: start 0x0 length 0x400 00:22:40.169 Nvme10n1 : 1.18 216.98 13.56 0.00 0.00 249509.76 17585.49 284863.15 00:22:40.169 =================================================================================================================== 00:22:40.169 Total : 2351.87 146.99 0.00 0.00 248192.39 17585.49 284863.15 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:40.169 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.169 
20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:40.169 rmmod nvme_tcp
00:22:40.169 rmmod nvme_fabrics
00:22:40.169 rmmod nvme_keyring
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3745035 ']'
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3745035
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3745035 ']'
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3745035
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:40.170 20:02:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3745035
00:22:40.170 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:40.170 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:40.170 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3745035'
00:22:40.170 killing process with pid 3745035
00:22:40.170 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3745035
00:22:40.170 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3745035
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:40.431 20:02:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:42.980 
00:22:42.980 real 0m16.588s
00:22:42.980 user 0m33.335s
00:22:42.980 sys 0m6.735s
00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:22:42.980 ************************************
00:22:42.980 END TEST nvmf_shutdown_tc1
00:22:42.980 ************************************
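The killprocess trace above follows a recognizable shell pattern: guard against an empty pid, probe liveness with `kill -0`, inspect the process name with `ps`, then `kill` and `wait` to reap it. The following is a minimal standalone sketch of that pattern; the function body is a simplification written for illustration, not the actual SPDK autotest_common.sh source:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the trace (an assumption,
# not the real SPDK helper): validate the pid, confirm it is alive,
# then terminate and reap it so no zombie is left behind.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0    # process already gone
    echo "killing process with pid $pid"
    kill "$pid"                               # send SIGTERM
    wait "$pid" 2>/dev/null || true           # reap; ignore signal exit status
}

sleep 60 &
killprocess $!
```

`wait` is what distinguishes this from a bare `kill`: it collects the child's exit status so the shell does not accumulate defunct entries, which matters in a long-running CI harness that starts and stops many targets.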
00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.980 ************************************ 00:22:42.980 START TEST nvmf_shutdown_tc2 00:22:42.980 ************************************ 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.980 20:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.980 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.980 20:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.980 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.980 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.981 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.981 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.981 
20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:42.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:42.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.776 ms
00:22:42.981 
00:22:42.981 --- 10.0.0.2 ping statistics ---
00:22:42.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:42.981 rtt min/avg/max/mdev = 0.776/0.776/0.776/0.000 ms
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:42.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:42.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms
00:22:42.981 
00:22:42.981 --- 10.0.0.1 ping statistics ---
00:22:42.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:42.981 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:42.981 20:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3747086 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3747086 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3747086 ']' 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:42.981 20:02:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:42.981 [2024-07-24 20:02:30.855796] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:22:42.981 [2024-07-24 20:02:30.855854] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:42.981 EAL: No free 2048 kB hugepages reported on node 1
00:22:43.242 [2024-07-24 20:02:30.933420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:43.242 [2024-07-24 20:02:30.989729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:43.242 [2024-07-24 20:02:30.989763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:43.242 [2024-07-24 20:02:30.989768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:43.242 [2024-07-24 20:02:30.989773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:43.242 [2024-07-24 20:02:30.989777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:43.242 [2024-07-24 20:02:30.989891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:43.242 [2024-07-24 20:02:30.990052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:22:43.242 [2024-07-24 20:02:30.990228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:43.242 [2024-07-24 20:02:30.990230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:43.814 [2024-07-24 20:02:31.677903] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:43.814 20:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.814 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.815 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.815 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.815 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:43.815 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:43.815 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:43.815 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.815 20:02:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.815 Malloc1 00:22:44.075 [2024-07-24 20:02:31.776576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.075 Malloc2 00:22:44.075 Malloc3 00:22:44.075 Malloc4 00:22:44.075 Malloc5 00:22:44.075 Malloc6 00:22:44.075 Malloc7 00:22:44.075 Malloc8 00:22:44.337 Malloc9 
00:22:44.337 Malloc10 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3747304 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3747304 /var/tmp/bdevperf.sock 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3747304 ']' 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.337 { 00:22:44.337 "params": { 00:22:44.337 "name": "Nvme$subsystem", 00:22:44.337 "trtype": "$TEST_TRANSPORT", 00:22:44.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.337 "adrfam": "ipv4", 00:22:44.337 "trsvcid": "$NVMF_PORT", 00:22:44.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.337 "hdgst": ${hdgst:-false}, 00:22:44.337 "ddgst": ${ddgst:-false} 00:22:44.337 }, 00:22:44.337 "method": "bdev_nvme_attach_controller" 00:22:44.337 } 00:22:44.337 EOF 00:22:44.337 )") 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.337 { 00:22:44.337 "params": { 00:22:44.337 "name": "Nvme$subsystem", 00:22:44.337 "trtype": "$TEST_TRANSPORT", 00:22:44.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.337 "adrfam": "ipv4", 00:22:44.337 "trsvcid": "$NVMF_PORT", 00:22:44.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.337 "hdgst": ${hdgst:-false}, 00:22:44.337 "ddgst": ${ddgst:-false} 00:22:44.337 }, 00:22:44.337 "method": "bdev_nvme_attach_controller" 00:22:44.337 } 00:22:44.337 EOF 00:22:44.337 )") 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.337 { 00:22:44.337 "params": { 00:22:44.337 "name": "Nvme$subsystem", 00:22:44.337 "trtype": "$TEST_TRANSPORT", 00:22:44.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.337 "adrfam": "ipv4", 00:22:44.337 "trsvcid": "$NVMF_PORT", 00:22:44.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.337 "hdgst": ${hdgst:-false}, 00:22:44.337 "ddgst": ${ddgst:-false} 00:22:44.337 }, 00:22:44.337 "method": "bdev_nvme_attach_controller" 00:22:44.337 } 00:22:44.337 EOF 00:22:44.337 )") 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:22:44.337 { 00:22:44.337 "params": { 00:22:44.337 "name": "Nvme$subsystem", 00:22:44.337 "trtype": "$TEST_TRANSPORT", 00:22:44.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.337 "adrfam": "ipv4", 00:22:44.337 "trsvcid": "$NVMF_PORT", 00:22:44.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.337 "hdgst": ${hdgst:-false}, 00:22:44.337 "ddgst": ${ddgst:-false} 00:22:44.337 }, 00:22:44.337 "method": "bdev_nvme_attach_controller" 00:22:44.337 } 00:22:44.337 EOF 00:22:44.337 )") 00:22:44.337 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.338 { 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme$subsystem", 00:22:44.338 "trtype": "$TEST_TRANSPORT", 00:22:44.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "$NVMF_PORT", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.338 "hdgst": ${hdgst:-false}, 00:22:44.338 "ddgst": ${ddgst:-false} 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 } 00:22:44.338 EOF 00:22:44.338 )") 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.338 { 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme$subsystem", 00:22:44.338 "trtype": "$TEST_TRANSPORT", 
00:22:44.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "$NVMF_PORT", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.338 "hdgst": ${hdgst:-false}, 00:22:44.338 "ddgst": ${ddgst:-false} 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 } 00:22:44.338 EOF 00:22:44.338 )") 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.338 [2024-07-24 20:02:32.213965] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:22:44.338 [2024-07-24 20:02:32.214019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747304 ] 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.338 { 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme$subsystem", 00:22:44.338 "trtype": "$TEST_TRANSPORT", 00:22:44.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "$NVMF_PORT", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.338 "hdgst": ${hdgst:-false}, 00:22:44.338 "ddgst": ${ddgst:-false} 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 } 00:22:44.338 EOF 00:22:44.338 )") 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.338 { 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme$subsystem", 00:22:44.338 "trtype": "$TEST_TRANSPORT", 00:22:44.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "$NVMF_PORT", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.338 "hdgst": ${hdgst:-false}, 00:22:44.338 "ddgst": ${ddgst:-false} 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 } 00:22:44.338 EOF 00:22:44.338 )") 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.338 { 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme$subsystem", 00:22:44.338 "trtype": "$TEST_TRANSPORT", 00:22:44.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "$NVMF_PORT", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.338 "hdgst": ${hdgst:-false}, 00:22:44.338 "ddgst": ${ddgst:-false} 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 } 00:22:44.338 EOF 00:22:44.338 )") 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.338 20:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.338 { 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme$subsystem", 00:22:44.338 "trtype": "$TEST_TRANSPORT", 00:22:44.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "$NVMF_PORT", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.338 "hdgst": ${hdgst:-false}, 00:22:44.338 "ddgst": ${ddgst:-false} 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 } 00:22:44.338 EOF 00:22:44.338 )") 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:44.338 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:44.338 20:02:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme1", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.338 "hdgst": false, 00:22:44.338 "ddgst": false 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 },{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme2", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.338 "hdgst": false, 00:22:44.338 "ddgst": false 00:22:44.338 }, 00:22:44.338 
"method": "bdev_nvme_attach_controller" 00:22:44.338 },{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme3", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.338 "hdgst": false, 00:22:44.338 "ddgst": false 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 },{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme4", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.338 "hdgst": false, 00:22:44.338 "ddgst": false 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 },{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme5", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.338 "hdgst": false, 00:22:44.338 "ddgst": false 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 },{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme6", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.338 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.338 "hdgst": false, 00:22:44.338 "ddgst": false 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 },{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme7", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.338 "subnqn": 
"nqn.2016-06.io.spdk:cnode7", 00:22:44.338 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.338 "hdgst": false, 00:22:44.338 "ddgst": false 00:22:44.338 }, 00:22:44.338 "method": "bdev_nvme_attach_controller" 00:22:44.338 },{ 00:22:44.338 "params": { 00:22:44.338 "name": "Nvme8", 00:22:44.338 "trtype": "tcp", 00:22:44.338 "traddr": "10.0.0.2", 00:22:44.338 "adrfam": "ipv4", 00:22:44.338 "trsvcid": "4420", 00:22:44.339 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.339 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:44.339 "hdgst": false, 00:22:44.339 "ddgst": false 00:22:44.339 }, 00:22:44.339 "method": "bdev_nvme_attach_controller" 00:22:44.339 },{ 00:22:44.339 "params": { 00:22:44.339 "name": "Nvme9", 00:22:44.339 "trtype": "tcp", 00:22:44.339 "traddr": "10.0.0.2", 00:22:44.339 "adrfam": "ipv4", 00:22:44.339 "trsvcid": "4420", 00:22:44.339 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:44.339 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:44.339 "hdgst": false, 00:22:44.339 "ddgst": false 00:22:44.339 }, 00:22:44.339 "method": "bdev_nvme_attach_controller" 00:22:44.339 },{ 00:22:44.339 "params": { 00:22:44.339 "name": "Nvme10", 00:22:44.339 "trtype": "tcp", 00:22:44.339 "traddr": "10.0.0.2", 00:22:44.339 "adrfam": "ipv4", 00:22:44.339 "trsvcid": "4420", 00:22:44.339 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:44.339 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:44.339 "hdgst": false, 00:22:44.339 "ddgst": false 00:22:44.339 }, 00:22:44.339 "method": "bdev_nvme_attach_controller" 00:22:44.339 }' 00:22:44.339 [2024-07-24 20:02:32.274173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.599 [2024-07-24 20:02:32.339411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.058 Running I/O for 10 seconds... 
00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:46.058 20:02:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:46.319 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:46.319 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:46.319 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.319 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.319 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.319 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:46.580 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.580 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:46.580 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:46.580 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3747304 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3747304 
']' 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3747304 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3747304 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3747304' 00:22:46.841 killing process with pid 3747304 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3747304 00:22:46.841 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3747304 00:22:47.102 Received shutdown signal, test time was about 1.059418 seconds 00:22:47.102 00:22:47.102 Latency(us) 00:22:47.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.102 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.102 Verification LBA range: start 0x0 length 0x400 00:22:47.102 Nvme1n1 : 0.99 194.43 12.15 0.00 0.00 324668.30 22937.60 274377.39 00:22:47.102 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.102 Verification LBA range: start 0x0 length 0x400 00:22:47.102 Nvme2n1 : 1.01 254.20 15.89 0.00 0.00 242647.25 23592.96 258648.75 00:22:47.102 Job: Nvme3n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.102 Verification LBA range: start 0x0 length 0x400 00:22:47.102 Nvme3n1 : 0.99 193.32 12.08 0.00 0.00 311121.35 24903.68 290106.03 00:22:47.102 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.102 Verification LBA range: start 0x0 length 0x400 00:22:47.102 Nvme4n1 : 1.06 241.84 15.12 0.00 0.00 234645.55 23265.28 262144.00 00:22:47.102 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.102 Verification LBA range: start 0x0 length 0x400 00:22:47.103 Nvme5n1 : 1.00 192.55 12.03 0.00 0.00 298184.53 24576.00 262144.00 00:22:47.103 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.103 Verification LBA range: start 0x0 length 0x400 00:22:47.103 Nvme6n1 : 1.00 191.96 12.00 0.00 0.00 291991.89 37573.97 270882.13 00:22:47.103 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.103 Verification LBA range: start 0x0 length 0x400 00:22:47.103 Nvme7n1 : 1.02 251.47 15.72 0.00 0.00 217808.64 23046.83 262144.00 00:22:47.103 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.103 Verification LBA range: start 0x0 length 0x400 00:22:47.103 Nvme8n1 : 1.01 190.84 11.93 0.00 0.00 278775.18 24029.87 291853.65 00:22:47.103 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.103 Verification LBA range: start 0x0 length 0x400 00:22:47.103 Nvme9n1 : 1.01 189.64 11.85 0.00 0.00 273607.68 30801.92 318068.05 00:22:47.103 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.103 Verification LBA range: start 0x0 length 0x400 00:22:47.103 Nvme10n1 : 0.98 277.53 17.35 0.00 0.00 176806.25 10431.15 246415.36 00:22:47.103 =================================================================================================================== 00:22:47.103 Total : 2177.80 136.11 0.00 0.00 258886.98 10431.15 318068.05 
00:22:47.103 20:02:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3747086 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.047 20:02:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:48.047 rmmod nvme_tcp 00:22:48.047 rmmod nvme_fabrics 00:22:48.308 rmmod nvme_keyring 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3747086 ']' 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3747086 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3747086 ']' 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3747086 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3747086 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3747086' 00:22:48.308 killing process with pid 3747086 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3747086 00:22:48.308 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3747086 00:22:48.569 20:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.569 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.569 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.569 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.569 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.569 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.569 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.569 20:02:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.486 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:50.486 00:22:50.486 real 0m8.000s 00:22:50.486 user 0m24.167s 00:22:50.486 sys 0m1.307s 00:22:50.486 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.486 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.486 ************************************ 00:22:50.486 END TEST nvmf_shutdown_tc2 00:22:50.486 ************************************ 00:22:50.486 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:50.486 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:50.486 20:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.486 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.748 ************************************ 00:22:50.748 START TEST nvmf_shutdown_tc3 00:22:50.748 ************************************ 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.748 20:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.748 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.748 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:22:50.748 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.749 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.749 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.749 20:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.749 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:22:51.011 00:22:51.011 --- 10.0.0.2 ping statistics --- 00:22:51.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.011 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:22:51.011 00:22:51.011 --- 10.0.0.1 ping statistics --- 00:22:51.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.011 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.011 
20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3748756 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3748756 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3748756 ']' 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.011 20:02:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.011 [2024-07-24 20:02:38.936360] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:22:51.011 [2024-07-24 20:02:38.936425] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.273 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.273 [2024-07-24 20:02:39.023904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.273 [2024-07-24 20:02:39.084694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.273 [2024-07-24 20:02:39.084728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.273 [2024-07-24 20:02:39.084733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.273 [2024-07-24 20:02:39.084738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.273 [2024-07-24 20:02:39.084742] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
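The `nvmf_tcp_init` trace above performs the test harness's two-endpoint setup: the target interface is moved into a fresh network namespace, both sides get addresses on 10.0.0.0/24, port 4420 is opened in iptables, and connectivity is verified with ping in both directions. A minimal sketch of that sequence, with the interface names, namespace, and addresses taken from the log; the commands are only assembled as strings here (executing them requires root), so this is illustrative, not the SPDK helper itself:

```python
# Sketch of the netns setup seen in the nvmf_tcp_init trace above.
# Interface names (cvl_0_0, cvl_0_1), the namespace name, and the
# addresses are copied from the log; commands are built as strings
# rather than executed, since the real steps need root privileges.
NS = "cvl_0_0_ns_spdk"
TARGET_IF, INITIATOR_IF = "cvl_0_0", "cvl_0_1"
TARGET_IP, INITIATOR_IP = "10.0.0.2", "10.0.0.1"


def netns_setup_cmds():
    in_ns = f"ip netns exec {NS}"
    return [
        # flush any stale addresses on both interfaces
        f"ip -4 addr flush {TARGET_IF}",
        f"ip -4 addr flush {INITIATOR_IF}",
        # create the namespace and move the target interface into it
        f"ip netns add {NS}",
        f"ip link set {TARGET_IF} netns {NS}",
        # address each side of the link
        f"ip addr add {INITIATOR_IP}/24 dev {INITIATOR_IF}",
        f"{in_ns} ip addr add {TARGET_IP}/24 dev {TARGET_IF}",
        # bring up both interfaces plus loopback inside the namespace
        f"ip link set {INITIATOR_IF} up",
        f"{in_ns} ip link set {TARGET_IF} up",
        f"{in_ns} ip link set lo up",
        # allow NVMe/TCP traffic on the default port
        f"iptables -I INPUT 1 -i {INITIATOR_IF} -p tcp --dport 4420 -j ACCEPT",
        # verify connectivity in both directions
        f"ping -c 1 {TARGET_IP}",
        f"{in_ns} ping -c 1 {INITIATOR_IP}",
    ]


for cmd in netns_setup_cmds():
    print(cmd)
```

After this setup, the log launches `nvmf_tgt` wrapped in `ip netns exec cvl_0_0_ns_spdk`, so the target listens on 10.0.0.2 inside the namespace while the initiator stays in the default namespace.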
00:22:51.273 [2024-07-24 20:02:39.084844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.273 [2024-07-24 20:02:39.085003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.273 [2024-07-24 20:02:39.085156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.273 [2024-07-24 20:02:39.085157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.846 [2024-07-24 20:02:39.755741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.846 20:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:51.846 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.108 20:02:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.108 Malloc1 00:22:52.108 [2024-07-24 20:02:39.854430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.108 Malloc2 00:22:52.108 Malloc3 00:22:52.108 Malloc4 00:22:52.108 Malloc5 00:22:52.108 Malloc6 00:22:52.108 Malloc7 00:22:52.370 Malloc8 00:22:52.370 Malloc9 
00:22:52.370 Malloc10 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3749137 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3749137 /var/tmp/bdevperf.sock 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3749137 ']' 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.370 { 00:22:52.370 "params": { 00:22:52.370 "name": "Nvme$subsystem", 00:22:52.370 "trtype": "$TEST_TRANSPORT", 00:22:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.370 "adrfam": "ipv4", 00:22:52.370 "trsvcid": "$NVMF_PORT", 00:22:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.370 "hdgst": ${hdgst:-false}, 00:22:52.370 "ddgst": ${ddgst:-false} 00:22:52.370 }, 00:22:52.370 "method": "bdev_nvme_attach_controller" 00:22:52.370 } 00:22:52.370 EOF 00:22:52.370 )") 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.370 { 00:22:52.370 "params": { 00:22:52.370 "name": "Nvme$subsystem", 00:22:52.370 "trtype": "$TEST_TRANSPORT", 00:22:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.370 "adrfam": "ipv4", 00:22:52.370 "trsvcid": "$NVMF_PORT", 00:22:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.370 "hdgst": ${hdgst:-false}, 00:22:52.370 "ddgst": ${ddgst:-false} 00:22:52.370 }, 00:22:52.370 "method": "bdev_nvme_attach_controller" 00:22:52.370 } 00:22:52.370 EOF 00:22:52.370 )") 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.370 { 00:22:52.370 "params": { 00:22:52.370 "name": "Nvme$subsystem", 00:22:52.370 "trtype": "$TEST_TRANSPORT", 00:22:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.370 "adrfam": "ipv4", 00:22:52.370 "trsvcid": "$NVMF_PORT", 00:22:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.370 "hdgst": ${hdgst:-false}, 00:22:52.370 "ddgst": ${ddgst:-false} 00:22:52.370 }, 00:22:52.370 "method": "bdev_nvme_attach_controller" 00:22:52.370 } 00:22:52.370 EOF 00:22:52.370 )") 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:22:52.370 { 00:22:52.370 "params": { 00:22:52.370 "name": "Nvme$subsystem", 00:22:52.370 "trtype": "$TEST_TRANSPORT", 00:22:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.370 "adrfam": "ipv4", 00:22:52.370 "trsvcid": "$NVMF_PORT", 00:22:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.370 "hdgst": ${hdgst:-false}, 00:22:52.370 "ddgst": ${ddgst:-false} 00:22:52.370 }, 00:22:52.370 "method": "bdev_nvme_attach_controller" 00:22:52.370 } 00:22:52.370 EOF 00:22:52.370 )") 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.370 { 00:22:52.370 "params": { 00:22:52.370 "name": "Nvme$subsystem", 00:22:52.370 "trtype": "$TEST_TRANSPORT", 00:22:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.370 "adrfam": "ipv4", 00:22:52.370 "trsvcid": "$NVMF_PORT", 00:22:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.370 "hdgst": ${hdgst:-false}, 00:22:52.370 "ddgst": ${ddgst:-false} 00:22:52.370 }, 00:22:52.370 "method": "bdev_nvme_attach_controller" 00:22:52.370 } 00:22:52.370 EOF 00:22:52.370 )") 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.370 { 00:22:52.370 "params": { 00:22:52.370 "name": "Nvme$subsystem", 00:22:52.370 "trtype": "$TEST_TRANSPORT", 
00:22:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.370 "adrfam": "ipv4", 00:22:52.370 "trsvcid": "$NVMF_PORT", 00:22:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.370 "hdgst": ${hdgst:-false}, 00:22:52.370 "ddgst": ${ddgst:-false} 00:22:52.370 }, 00:22:52.370 "method": "bdev_nvme_attach_controller" 00:22:52.370 } 00:22:52.370 EOF 00:22:52.370 )") 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.370 [2024-07-24 20:02:40.299432] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:22:52.370 [2024-07-24 20:02:40.299485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749137 ] 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.370 { 00:22:52.370 "params": { 00:22:52.370 "name": "Nvme$subsystem", 00:22:52.370 "trtype": "$TEST_TRANSPORT", 00:22:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.370 "adrfam": "ipv4", 00:22:52.370 "trsvcid": "$NVMF_PORT", 00:22:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.370 "hdgst": ${hdgst:-false}, 00:22:52.370 "ddgst": ${ddgst:-false} 00:22:52.370 }, 00:22:52.370 "method": "bdev_nvme_attach_controller" 00:22:52.370 } 00:22:52.370 EOF 00:22:52.370 )") 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.370 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.371 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.371 { 00:22:52.371 "params": { 00:22:52.371 "name": "Nvme$subsystem", 00:22:52.371 "trtype": "$TEST_TRANSPORT", 00:22:52.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.371 "adrfam": "ipv4", 00:22:52.371 "trsvcid": "$NVMF_PORT", 00:22:52.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.371 "hdgst": ${hdgst:-false}, 00:22:52.371 "ddgst": ${ddgst:-false} 00:22:52.371 }, 00:22:52.371 "method": "bdev_nvme_attach_controller" 00:22:52.371 } 00:22:52.371 EOF 00:22:52.371 )") 00:22:52.371 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.371 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.371 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.371 { 00:22:52.371 "params": { 00:22:52.371 "name": "Nvme$subsystem", 00:22:52.371 "trtype": "$TEST_TRANSPORT", 00:22:52.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.371 "adrfam": "ipv4", 00:22:52.371 "trsvcid": "$NVMF_PORT", 00:22:52.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.371 "hdgst": ${hdgst:-false}, 00:22:52.371 "ddgst": ${ddgst:-false} 00:22:52.371 }, 00:22:52.371 "method": "bdev_nvme_attach_controller" 00:22:52.371 } 00:22:52.371 EOF 00:22:52.371 )") 00:22:52.371 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.632 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:52.632 20:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:52.632 { 00:22:52.632 "params": { 00:22:52.632 "name": "Nvme$subsystem", 00:22:52.632 "trtype": "$TEST_TRANSPORT", 00:22:52.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.632 "adrfam": "ipv4", 00:22:52.632 "trsvcid": "$NVMF_PORT", 00:22:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.632 "hdgst": ${hdgst:-false}, 00:22:52.632 "ddgst": ${ddgst:-false} 00:22:52.632 }, 00:22:52.632 "method": "bdev_nvme_attach_controller" 00:22:52.632 } 00:22:52.632 EOF 00:22:52.632 )") 00:22:52.632 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.632 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:52.632 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:52.632 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:52.632 20:02:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:52.632 "params": { 00:22:52.632 "name": "Nvme1", 00:22:52.632 "trtype": "tcp", 00:22:52.632 "traddr": "10.0.0.2", 00:22:52.632 "adrfam": "ipv4", 00:22:52.632 "trsvcid": "4420", 00:22:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.632 "hdgst": false, 00:22:52.632 "ddgst": false 00:22:52.632 }, 00:22:52.632 "method": "bdev_nvme_attach_controller" 00:22:52.632 },{ 00:22:52.632 "params": { 00:22:52.632 "name": "Nvme2", 00:22:52.632 "trtype": "tcp", 00:22:52.632 "traddr": "10.0.0.2", 00:22:52.632 "adrfam": "ipv4", 00:22:52.632 "trsvcid": "4420", 00:22:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.632 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:52.632 "hdgst": false, 00:22:52.632 "ddgst": false 00:22:52.632 }, 00:22:52.632 
"method": "bdev_nvme_attach_controller" 00:22:52.632 },{ 00:22:52.632 "params": { 00:22:52.632 "name": "Nvme3", 00:22:52.632 "trtype": "tcp", 00:22:52.632 "traddr": "10.0.0.2", 00:22:52.632 "adrfam": "ipv4", 00:22:52.632 "trsvcid": "4420", 00:22:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:52.632 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:52.632 "hdgst": false, 00:22:52.632 "ddgst": false 00:22:52.632 }, 00:22:52.632 "method": "bdev_nvme_attach_controller" 00:22:52.632 },{ 00:22:52.632 "params": { 00:22:52.632 "name": "Nvme4", 00:22:52.632 "trtype": "tcp", 00:22:52.632 "traddr": "10.0.0.2", 00:22:52.632 "adrfam": "ipv4", 00:22:52.632 "trsvcid": "4420", 00:22:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:52.632 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:52.632 "hdgst": false, 00:22:52.632 "ddgst": false 00:22:52.632 }, 00:22:52.632 "method": "bdev_nvme_attach_controller" 00:22:52.632 },{ 00:22:52.632 "params": { 00:22:52.632 "name": "Nvme5", 00:22:52.632 "trtype": "tcp", 00:22:52.632 "traddr": "10.0.0.2", 00:22:52.632 "adrfam": "ipv4", 00:22:52.632 "trsvcid": "4420", 00:22:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:52.632 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:52.632 "hdgst": false, 00:22:52.632 "ddgst": false 00:22:52.632 }, 00:22:52.632 "method": "bdev_nvme_attach_controller" 00:22:52.632 },{ 00:22:52.632 "params": { 00:22:52.632 "name": "Nvme6", 00:22:52.632 "trtype": "tcp", 00:22:52.632 "traddr": "10.0.0.2", 00:22:52.632 "adrfam": "ipv4", 00:22:52.633 "trsvcid": "4420", 00:22:52.633 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:52.633 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:52.633 "hdgst": false, 00:22:52.633 "ddgst": false 00:22:52.633 }, 00:22:52.633 "method": "bdev_nvme_attach_controller" 00:22:52.633 },{ 00:22:52.633 "params": { 00:22:52.633 "name": "Nvme7", 00:22:52.633 "trtype": "tcp", 00:22:52.633 "traddr": "10.0.0.2", 00:22:52.633 "adrfam": "ipv4", 00:22:52.633 "trsvcid": "4420", 00:22:52.633 "subnqn": 
"nqn.2016-06.io.spdk:cnode7", 00:22:52.633 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:52.633 "hdgst": false, 00:22:52.633 "ddgst": false 00:22:52.633 }, 00:22:52.633 "method": "bdev_nvme_attach_controller" 00:22:52.633 },{ 00:22:52.633 "params": { 00:22:52.633 "name": "Nvme8", 00:22:52.633 "trtype": "tcp", 00:22:52.633 "traddr": "10.0.0.2", 00:22:52.633 "adrfam": "ipv4", 00:22:52.633 "trsvcid": "4420", 00:22:52.633 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:52.633 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:52.633 "hdgst": false, 00:22:52.633 "ddgst": false 00:22:52.633 }, 00:22:52.633 "method": "bdev_nvme_attach_controller" 00:22:52.633 },{ 00:22:52.633 "params": { 00:22:52.633 "name": "Nvme9", 00:22:52.633 "trtype": "tcp", 00:22:52.633 "traddr": "10.0.0.2", 00:22:52.633 "adrfam": "ipv4", 00:22:52.633 "trsvcid": "4420", 00:22:52.633 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:52.633 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:52.633 "hdgst": false, 00:22:52.633 "ddgst": false 00:22:52.633 }, 00:22:52.633 "method": "bdev_nvme_attach_controller" 00:22:52.633 },{ 00:22:52.633 "params": { 00:22:52.633 "name": "Nvme10", 00:22:52.633 "trtype": "tcp", 00:22:52.633 "traddr": "10.0.0.2", 00:22:52.633 "adrfam": "ipv4", 00:22:52.633 "trsvcid": "4420", 00:22:52.633 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:52.633 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:52.633 "hdgst": false, 00:22:52.633 "ddgst": false 00:22:52.633 }, 00:22:52.633 "method": "bdev_nvme_attach_controller" 00:22:52.633 }' 00:22:52.633 [2024-07-24 20:02:40.358840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.633 [2024-07-24 20:02:40.423432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.019 Running I/O for 10 seconds... 
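The repeated `EOF` blocks in the trace above come from a loop in nvmf/common.sh that expands the same heredoc once per subsystem, appends each JSON fragment to a `config` array, and finally joins the fragments with `IFS=,` before feeding them to bdevperf. A simplified standalone sketch of that pattern (variable names mirror the trace; the two-subsystem loop bound is illustrative, not SPDK's actual script):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config-building pattern seen in the trace.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT mirror the log's variables.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # Each iteration expands the same heredoc with a different $subsystem,
  # which is why the trace shows the identical block repeated.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the IFS=, / printf '%s\n' step in the
# trace does before the combined JSON is handed to bdevperf.
IFS=,
printf '%s\n' "${config[*]}"
```

Note that `${hdgst:-false}` / `${ddgst:-false}` default the digest flags to `false` when unset, matching the `"hdgst": false, "ddgst": false` values in the final printed config.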
00:22:54.280 20:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.280 20:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:54.280 20:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:54.280 20:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.280 20:02:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:54.280 20:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:54.280 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:54.541 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:54.541 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:54.541 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:54.541 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:54.541 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.541 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=130 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 130 -ge 100 ']' 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3748756 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3748756 ']' 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3748756 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3748756 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3748756' 00:22:54.824 killing process with pid 3748756 00:22:54.824 20:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3748756 00:22:54.824 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3748756 00:22:54.824 [2024-07-24 20:02:42.575506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.824 [2024-07-24 20:02:42.575555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.824 [2024-07-24 20:02:42.575561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.824 [2024-07-24 20:02:42.575566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.824 [2024-07-24 20:02:42.575571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.824 [2024-07-24 20:02:42.575575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.575580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.575585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.575589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.575594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.575598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa61fe0 
is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.825 [2024-07-24 20:02:42.577585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.825 [2024-07-24 20:02:42.577596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.825 [2024-07-24 20:02:42.577604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.825 [2024-07-24 20:02:42.577612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.825 [2024-07-24 20:02:42.577620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.825 [2024-07-24 20:02:42.577628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.825 [2024-07-24 20:02:42.577636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.825 [2024-07-24 20:02:42.577643] nvme_tcp.c:
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc295d0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.825 [2024-07-24 20:02:42.577875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578050]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.578092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa624a0 is same with the state(5) to be set 00:22:54.826 [2024-07-24 20:02:42.579007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.826 [2024-07-24 20:02:42.579353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.826 [2024-07-24 20:02:42.579364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.827 [2024-07-24 20:02:42.579427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) 
to be set 00:22:54.827 [2024-07-24 20:02:42.579475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-07-24 20:02:42.579658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-07-24 20:02:42.579668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.827 [2024-07-24 20:02:42.579671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579682] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24
20:02:42.579724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891a90 is same with the state(5) to be set 00:22:54.828 [2024-07-24 20:02:42.579749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 
20:02:42.579885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579978] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.579985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.579995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.580012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.580028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.580045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.580062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.580078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.580094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-07-24 20:02:42.580112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-07-24 20:02:42.580119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 
[2024-07-24 20:02:42.580170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891f50 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.580509] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcdc4d0 was disconnected and freed. reset controller. 00:22:54.829 [2024-07-24 20:02:42.580650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 
[2024-07-24 20:02:42.580906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.580958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.580969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.580977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.580986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.580987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.580991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.580995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-07-24 20:02:42.580996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.581003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.581004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.829 [2024-07-24 20:02:42.581008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.829 [2024-07-24 20:02:42.581013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 
[2024-07-24 20:02:42.581106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 
00:22:54.830 [2024-07-24 20:02:42.581144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with 
the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 
is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.830 [2024-07-24 20:02:42.581300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.830 [2024-07-24 20:02:42.581303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.830 [2024-07-24 20:02:42.581305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.831 [2024-07-24 20:02:42.581310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.831 [2024-07-24 20:02:42.581313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581315] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892410 is same with the state(5) to be set 00:22:54.831 [2024-07-24 20:02:42.581321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 20:02:42.581582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.831 [2024-07-24 20:02:42.581591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.831 [2024-07-24 
20:02:42.581600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.831 [2024-07-24 20:02:42.581607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs for cid:54-62 (lba:23296-24320, len:128) elided, 2024-07-24 20:02:42.581616 through 20:02:42.581756 ...]
00:22:54.831 [2024-07-24 20:02:42.581766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.831 [2024-07-24 20:02:42.581774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.831 [2024-07-24 20:02:42.581818] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd98330 was disconnected and freed. reset controller.
00:22:54.831 [2024-07-24 20:02:42.581910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8928f0 is same with the state(5) to be set
[... same recv-state error for tqpair=0x8928f0 repeated through 2024-07-24 20:02:42.582233 ...]
00:22:54.832 [2024-07-24 20:02:42.582904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892dd0 is same with the state(5) to be set
[... same recv-state error for tqpair=0x892dd0 repeated through 2024-07-24 20:02:42.583204 ...]
00:22:54.833 [2024-07-24 20:02:42.583782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x893290 is same with the state(5) to be set
[... same recv-state error for tqpair=0x893290 repeated through 2024-07-24 20:02:42.600692, with a pause between 20:02:42.583915 and 20:02:42.600439 ...]
00:22:54.834 [2024-07-24 20:02:42.601039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:54.834 [2024-07-24 20:02:42.601094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86d00 (9): Bad file descriptor
00:22:54.834 [2024-07-24 20:02:42.601132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc295d0 (9): Bad file descriptor
[... four blocks of ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 admin commands, each ABORTED - SQ DELETION (00/08), for tqpair=0xdcd1a0, 0xdccf00, 0xdef770 and 0xc36e30 (nvme_tcp.c: 327 recv-state errors at 20:02:42.601253, .601350, .601440 and .601533), plus one interleaved tcp.c:1653 recv-state error for tqpair=0x893750 at 20:02:42.601280 ...]
00:22:54.834 [2024-07-24 20:02:42.601556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000
cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4daf0 is same with the state(5) to be set 00:22:54.834 [2024-07-24 20:02:42.601642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.834 [2024-07-24 20:02:42.601700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.834 [2024-07-24 20:02:42.601707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5250 is same with the state(5) to be set 00:22:54.834 [2024-07-24 20:02:42.601726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef480 is same with the state(5) to be set 00:22:54.835 [2024-07-24 20:02:42.601816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.835 [2024-07-24 20:02:42.601872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.601879] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77b340 is same with the state(5) to be set 00:22:54.835 [2024-07-24 20:02:42.603197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 
20:02:42.603534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.835 [2024-07-24 20:02:42.603770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.835 [2024-07-24 20:02:42.603780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 
[2024-07-24 20:02:42.603843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.603989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.603999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.836 [2024-07-24 20:02:42.604243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.836 [2024-07-24 20:02:42.604252] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 2024-07-24 20:02:42.604261 - 20:02:42.604384: repeated NOTICE pairs condensed: READ sqid:1 cid:40-47 (lba 21504-22400, len:128 each) all completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:54.836 [2024-07-24 20:02:42.604434] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd9ad20 was disconnected and freed. reset controller.
[... 2024-07-24 20:02:42.605148 - 20:02:42.606242: repeated NOTICE pairs condensed: WRITE sqid:1 cid:0-63 (lba 16384-24448, len:128 each) all completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:54.838 [2024-07-24 20:02:42.606263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:54.838 [2024-07-24 20:02:42.606303] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair
0xd5ffb0 was disconnected and freed. reset controller. 00:22:54.838 [2024-07-24 20:02:42.606330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:54.838 [2024-07-24 20:02:42.606349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf5250 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.609237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:54.838 [2024-07-24 20:02:42.609262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4daf0 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.609785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.838 [2024-07-24 20:02:42.609802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd86d00 with addr=10.0.0.2, port=4420 00:22:54.838 [2024-07-24 20:02:42.609810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86d00 is same with the state(5) to be set 00:22:54.838 [2024-07-24 20:02:42.609860] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:54.838 [2024-07-24 20:02:42.610358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:54.838 [2024-07-24 20:02:42.610379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef480 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.610834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.838 [2024-07-24 20:02:42.610848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf5250 with addr=10.0.0.2, port=4420 00:22:54.838 [2024-07-24 20:02:42.610856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5250 is same with the state(5) to be set 00:22:54.838 
[2024-07-24 20:02:42.610873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86d00 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.610941] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:54.838 [2024-07-24 20:02:42.611247] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:54.838 [2024-07-24 20:02:42.612108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.838 [2024-07-24 20:02:42.612126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4daf0 with addr=10.0.0.2, port=4420 00:22:54.838 [2024-07-24 20:02:42.612135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4daf0 is same with the state(5) to be set 00:22:54.838 [2024-07-24 20:02:42.612154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf5250 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.612164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:54.838 [2024-07-24 20:02:42.612170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:54.838 [2024-07-24 20:02:42.612180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:22:54.838 [2024-07-24 20:02:42.612207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcd1a0 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.612225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdccf00 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.612249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef770 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.612267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36e30 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.612287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77b340 (9): Bad file descriptor 00:22:54.838 [2024-07-24 20:02:42.612378] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:54.838 [2024-07-24 20:02:42.612419] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:54.839 [2024-07-24 20:02:42.612457] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:54.839 [2024-07-24 20:02:42.612478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.839 [2024-07-24 20:02:42.612928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.839 [2024-07-24 20:02:42.612941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef480 with addr=10.0.0.2, port=4420 00:22:54.839 [2024-07-24 20:02:42.612949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef480 is same with the state(5) to be set 00:22:54.839 [2024-07-24 20:02:42.612960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4daf0 (9): Bad file descriptor 00:22:54.839 [2024-07-24 20:02:42.612968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:54.839 [2024-07-24 20:02:42.612975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:54.839 [2024-07-24 20:02:42.612983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
[... 2024-07-24 20:02:42.613018 - 20:02:42.613533: repeated NOTICE pairs condensed: READ sqid:1 cid:0-27 (lba 16384-19840, len:128 each) all completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:54.839 [2024-07-24 20:02:42.613542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.839 [2024-07-24 20:02:42.613550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.839 [2024-07-24 20:02:42.613560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.839 [2024-07-24 20:02:42.613567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.839 [2024-07-24 20:02:42.613577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.839 [2024-07-24 20:02:42.613584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.839 [2024-07-24 20:02:42.613594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.839 [2024-07-24 20:02:42.613602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.839 [2024-07-24 20:02:42.613613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.839 [2024-07-24 20:02:42.613622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.839 [2024-07-24 20:02:42.613632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.839 [2024-07-24 20:02:42.613639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:54.839 [2024-07-24 20:02:42.613649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.839 [2024-07-24 20:02:42.613657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 
20:02:42.613745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613844] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.613986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.613995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 
[2024-07-24 20:02:42.614057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.840 [2024-07-24 20:02:42.614191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.840 [2024-07-24 20:02:42.614199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96f40 is same with the state(5) to be set 00:22:54.840 [2024-07-24 20:02:42.615498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.840 [2024-07-24 20:02:42.615513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.840 [2024-07-24 20:02:42.615534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef480 (9): Bad file descriptor 00:22:54.840 [2024-07-24 20:02:42.615545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:54.840 [2024-07-24 20:02:42.615554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:54.840 [2024-07-24 20:02:42.615563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:54.840 [2024-07-24 20:02:42.615618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.840 [2024-07-24 20:02:42.616041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.840 [2024-07-24 20:02:42.616054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc295d0 with addr=10.0.0.2, port=4420 00:22:54.840 [2024-07-24 20:02:42.616063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc295d0 is same with the state(5) to be set 00:22:54.840 [2024-07-24 20:02:42.616072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:54.840 [2024-07-24 20:02:42.616078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:54.840 [2024-07-24 20:02:42.616089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:54.840 [2024-07-24 20:02:42.616392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.840 [2024-07-24 20:02:42.616404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc295d0 (9): Bad file descriptor 00:22:54.840 [2024-07-24 20:02:42.616454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.840 [2024-07-24 20:02:42.616462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:54.840 [2024-07-24 20:02:42.616468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.840 [2024-07-24 20:02:42.616508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:54.840 [2024-07-24 20:02:42.616518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.840 [2024-07-24 20:02:42.616986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.840 [2024-07-24 20:02:42.616999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd86d00 with addr=10.0.0.2, port=4420 00:22:54.840 [2024-07-24 20:02:42.617007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86d00 is same with the state(5) to be set 00:22:54.840 [2024-07-24 20:02:42.617042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86d00 (9): Bad file descriptor 00:22:54.841 [2024-07-24 20:02:42.617074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:54.841 [2024-07-24 20:02:42.617082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:54.841 [2024-07-24 20:02:42.617089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:54.841 [2024-07-24 20:02:42.617124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.841 [2024-07-24 20:02:42.619911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:54.841 [2024-07-24 20:02:42.620467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.841 [2024-07-24 20:02:42.620509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf5250 with addr=10.0.0.2, port=4420 00:22:54.841 [2024-07-24 20:02:42.620520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5250 is same with the state(5) to be set 00:22:54.841 [2024-07-24 20:02:42.620569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf5250 (9): Bad file descriptor 00:22:54.841 [2024-07-24 20:02:42.620604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:54.841 [2024-07-24 20:02:42.620612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:54.841 [2024-07-24 20:02:42.620620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:54.841 [2024-07-24 20:02:42.620659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.841 [2024-07-24 20:02:42.620961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:54.841 [2024-07-24 20:02:42.621565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.841 [2024-07-24 20:02:42.621604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4daf0 with addr=10.0.0.2, port=4420 00:22:54.841 [2024-07-24 20:02:42.621615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4daf0 is same with the state(5) to be set 00:22:54.841 [2024-07-24 20:02:42.621661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4daf0 (9): Bad file descriptor 00:22:54.841 [2024-07-24 20:02:42.621700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:54.841 [2024-07-24 20:02:42.621708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:54.841 [2024-07-24 20:02:42.621716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:54.841 [2024-07-24 20:02:42.621792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.841 [2024-07-24 20:02:42.621855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.621867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.621883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.621892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.621903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.621911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.621922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.621930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.621940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.621948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.621958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.621966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.621976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.621983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.621993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.841 [2024-07-24 20:02:42.622157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.841 [2024-07-24 20:02:42.622164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:54.841 [2024-07-24 20:02:42.622175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.841 [2024-07-24 20:02:42.622398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.841 [2024-07-24 20:02:42.622405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.622981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.622991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.623001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.623009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.623018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.623027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.623035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd997e0 is same with the state(5) to be set
00:22:54.842 [2024-07-24 20:02:42.624331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.624346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.624360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.624369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.624381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.624391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.624402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.624412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.842 [2024-07-24 20:02:42.624424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.842 [2024-07-24 20:02:42.624433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.624991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.624998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.625009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.625017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.625027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.843 [2024-07-24 20:02:42.625035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.843 [2024-07-24 20:02:42.625044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.625530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.625539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc25270 is same with the state(5) to be set
00:22:54.844 [2024-07-24 20:02:42.626802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.626817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.626829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.626838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.626850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.626859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.626870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.626880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.626891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.626900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.626911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.626920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.844 [2024-07-24 20:02:42.626931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.844 [2024-07-24 20:02:42.626940] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.844 [2024-07-24 20:02:42.626951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.844 [2024-07-24 20:02:42.626960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.844 [2024-07-24 20:02:42.626971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.844 [2024-07-24 20:02:42.626979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.844 [2024-07-24 20:02:42.626989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.844 [2024-07-24 20:02:42.626996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.844 [2024-07-24 20:02:42.627006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.844 [2024-07-24 20:02:42.627013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.844 [2024-07-24 20:02:42.627023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.844 [2024-07-24 20:02:42.627031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.844 [2024-07-24 20:02:42.627041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:54.844 [2024-07-24 20:02:42.627049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.844 [2024-07-24 20:02:42.627059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 
20:02:42.627148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:54.845 [2024-07-24 20:02:42.627461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627556] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.845 [2024-07-24 20:02:42.627717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.845 [2024-07-24 20:02:42.627727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 
20:02:42.627862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627960] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.627968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.627977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd50540 is same with the state(5) to be set 00:22:54.846 [2024-07-24 20:02:42.629243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629432] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.846 [2024-07-24 20:02:42.629523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.846 [2024-07-24 20:02:42.629531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.846 [2024-07-24 20:02:42.629541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.846 [2024-07-24 20:02:42.629548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: READ cid:24-63 (lba:11264-16256, len:128 each) and WRITE cid:0-6 (lba:16384-17152, len:128 each), every completion ABORTED - SQ DELETION (00/08) ...]
00:22:54.848 [2024-07-24 20:02:42.630414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5d710 is same with the state(5) to be set
00:22:54.848 [2024-07-24 20:02:42.631680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.848 [2024-07-24 20:02:42.631695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided: READ cid:1-63 (lba:8320-16256, len:128 each), every completion ABORTED - SQ DELETION (00/08) ...]
00:22:54.849 [2024-07-24 20:02:42.632862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5ec10 is same with the state(5) to be set
00:22:54.849 [2024-07-24 20:02:42.634770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:54.849 [2024-07-24 20:02:42.634796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:54.849 [2024-07-24 20:02:42.634806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:54.849 [2024-07-24 20:02:42.634816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:54.849 [2024-07-24 20:02:42.634909] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:54.849 task offset: 15488 on job bdev=Nvme10n1 fails
00:22:54.849
00:22:54.849                                                  Latency(us)
00:22:54.849 Device Information                               : runtime(s)    IOPS    MiB/s    Fail/s   TO/s     Average    min        max
00:22:54.849 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.849 Job: Nvme1n1 ended in about 0.64 seconds with error
00:22:54.849 Verification LBA range: start 0x0 length 0x400
00:22:54.849 Nvme1n1                                          : 0.64          198.47  12.40    99.23    0.00     211522.84  23920.64   230686.72
00:22:54.849 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.849 Job: Nvme2n1 ended in about 0.63 seconds with error
00:22:54.849 Verification LBA range: start 0x0 length 0x400
00:22:54.850 Nvme2n1                                          : 0.63          202.32  12.65    101.16   0.00     200944.07  25449.81   215831.89
00:22:54.850 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.850 Job: Nvme3n1 ended in about 0.65 seconds with error
00:22:54.850 Verification LBA range: start 0x0 length 0x400
00:22:54.850 Nvme3n1                                          : 0.65          97.89   6.12     97.89    0.00     302441.81  66409.81   225443.84
00:22:54.850 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.850 Job: Nvme4n1 ended in about 0.64 seconds with error
Verification LBA range: start 0x0 length 0x400 00:22:54.850 Nvme4n1 : 0.64 200.88 12.55 100.44 0.00 189531.88 21736.11 248162.99 00:22:54.850 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.850 Job: Nvme5n1 ended in about 0.66 seconds with error 00:22:54.850 Verification LBA range: start 0x0 length 0x400 00:22:54.850 Nvme5n1 : 0.66 97.52 6.10 97.52 0.00 284311.89 46530.56 244667.73 00:22:54.850 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.850 Job: Nvme6n1 ended in about 0.66 seconds with error 00:22:54.850 Verification LBA range: start 0x0 length 0x400 00:22:54.850 Nvme6n1 : 0.66 97.16 6.07 97.16 0.00 275863.89 24576.00 244667.73 00:22:54.850 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.850 Job: Nvme7n1 ended in about 0.66 seconds with error 00:22:54.850 Verification LBA range: start 0x0 length 0x400 00:22:54.850 Nvme7n1 : 0.66 107.39 6.71 96.81 0.00 253447.53 24466.77 283115.52 00:22:54.850 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.850 Job: Nvme8n1 ended in about 0.66 seconds with error 00:22:54.850 Verification LBA range: start 0x0 length 0x400 00:22:54.850 Nvme8n1 : 0.66 96.45 6.03 96.45 0.00 258928.64 25340.59 258648.75 00:22:54.850 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.850 Job: Nvme9n1 ended in about 0.64 seconds with error 00:22:54.850 Verification LBA range: start 0x0 length 0x400 00:22:54.850 Nvme9n1 : 0.64 200.47 12.53 100.24 0.00 157834.24 8192.00 239424.85 00:22:54.850 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:54.850 Job: Nvme10n1 ended in about 0.63 seconds with error 00:22:54.850 Verification LBA range: start 0x0 length 0x400 00:22:54.850 Nvme10n1 : 0.63 101.49 6.34 101.49 0.00 223492.69 22500.69 309329.92 00:22:54.850 
=================================================================================================================== 00:22:54.850 Total : 1400.06 87.50 988.40 0.00 228300.93 8192.00 309329.92 00:22:54.850 [2024-07-24 20:02:42.661897] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:54.850 [2024-07-24 20:02:42.661932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:54.850 [2024-07-24 20:02:42.662353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.662371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc36e30 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.662380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc36e30 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.662791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.662807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77b340 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.662816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77b340 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.663287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.663298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdcd1a0 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.663305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdcd1a0 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.663605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.663616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0xdccf00 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.663623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdccf00 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.664998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:54.850 [2024-07-24 20:02:42.665011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.850 [2024-07-24 20:02:42.665020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:54.850 [2024-07-24 20:02:42.665029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:54.850 [2024-07-24 20:02:42.665038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:54.850 [2024-07-24 20:02:42.665224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.665237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef770 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.665245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef770 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.665258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36e30 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.665270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77b340 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.665280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcd1a0 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.665290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xdccf00 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.665320] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.850 [2024-07-24 20:02:42.665332] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.850 [2024-07-24 20:02:42.665343] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.850 [2024-07-24 20:02:42.665353] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:54.850 [2024-07-24 20:02:42.665900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.665915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef480 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.665922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef480 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.666263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.666274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc295d0 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.666281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc295d0 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.666519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.666529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd86d00 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.666536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86d00 is same with the state(5) to be set 
00:22:54.850 [2024-07-24 20:02:42.666872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.666882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf5250 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.666889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf5250 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.667348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.850 [2024-07-24 20:02:42.667359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4daf0 with addr=10.0.0.2, port=4420 00:22:54.850 [2024-07-24 20:02:42.667367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4daf0 is same with the state(5) to be set 00:22:54.850 [2024-07-24 20:02:42.667376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef770 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.667384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:54.850 [2024-07-24 20:02:42.667391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:54.850 [2024-07-24 20:02:42.667399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:54.850 [2024-07-24 20:02:42.667411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:54.850 [2024-07-24 20:02:42.667417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:54.850 [2024-07-24 20:02:42.667424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:54.850 [2024-07-24 20:02:42.667435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:54.850 [2024-07-24 20:02:42.667441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:54.850 [2024-07-24 20:02:42.667449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:54.850 [2024-07-24 20:02:42.667458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:54.850 [2024-07-24 20:02:42.667466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:54.850 [2024-07-24 20:02:42.667472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:54.850 [2024-07-24 20:02:42.667536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.850 [2024-07-24 20:02:42.667545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.850 [2024-07-24 20:02:42.667551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.850 [2024-07-24 20:02:42.667557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.850 [2024-07-24 20:02:42.667565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef480 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.667576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc295d0 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.667585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86d00 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.667594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf5250 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.667606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4daf0 (9): Bad file descriptor 00:22:54.850 [2024-07-24 20:02:42.667615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:54.851 [2024-07-24 20:02:42.667621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:54.851 [2024-07-24 20:02:42.667629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:54.851 [2024-07-24 20:02:42.667657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.851 [2024-07-24 20:02:42.667665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:54.851 [2024-07-24 20:02:42.667671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:54.851 [2024-07-24 20:02:42.667680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:22:54.851 [2024-07-24 20:02:42.667690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.851 [2024-07-24 20:02:42.667696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:54.851 [2024-07-24 20:02:42.667703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.851 [2024-07-24 20:02:42.667713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:54.851 [2024-07-24 20:02:42.667721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:54.851 [2024-07-24 20:02:42.667729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:54.851 [2024-07-24 20:02:42.667738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:54.851 [2024-07-24 20:02:42.667745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:54.851 [2024-07-24 20:02:42.667751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:54.851 [2024-07-24 20:02:42.667761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:54.851 [2024-07-24 20:02:42.667769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:54.851 [2024-07-24 20:02:42.667776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:54.851 [2024-07-24 20:02:42.668113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:54.851 [2024-07-24 20:02:42.668124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.851 [2024-07-24 20:02:42.668130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.851 [2024-07-24 20:02:42.668137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.851 [2024-07-24 20:02:42.668144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:55.133 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:55.133 20:02:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3749137 00:22:56.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3749137) - No such process 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:56.076 20:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.076 rmmod nvme_tcp 00:22:56.076 rmmod nvme_fabrics 00:22:56.076 rmmod nvme_keyring 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.076 20:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.076 20:02:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.627 00:22:58.627 real 0m7.538s 00:22:58.627 user 0m17.713s 00:22:58.627 sys 0m1.150s 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.627 ************************************ 00:22:58.627 END TEST nvmf_shutdown_tc3 00:22:58.627 ************************************ 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:58.627 00:22:58.627 real 0m32.496s 00:22:58.627 user 1m15.355s 00:22:58.627 sys 0m9.444s 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:58.627 ************************************ 00:22:58.627 END TEST nvmf_shutdown 00:22:58.627 ************************************ 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:58.627 00:22:58.627 real 11m30.323s 00:22:58.627 user 24m34.209s 00:22:58.627 sys 3m23.748s 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.627 20:02:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.627 ************************************ 00:22:58.627 END TEST nvmf_target_extra 00:22:58.627 ************************************ 00:22:58.627 20:02:46 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:58.627 20:02:46 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:58.627 20:02:46 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:58.627 20:02:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.627 ************************************ 00:22:58.627 START TEST nvmf_host 00:22:58.627 ************************************ 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:58.627 * Looking for test storage... 00:22:58.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.627 20:02:46 
nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.627 ************************************ 00:22:58.627 START TEST nvmf_multicontroller 00:22:58.627 ************************************ 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:58.627 * Looking for test storage... 
00:22:58.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.627 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.628 20:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.779 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:06.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:06.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:06.780 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:06.780 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:23:06.780 00:23:06.780 --- 10.0.0.2 ping statistics --- 00:23:06.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.780 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:23:06.780 00:23:06.780 --- 10.0.0.1 ping statistics --- 00:23:06.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.780 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3754004 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3754004 00:23:06.780 20:02:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3754004 ']' 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.780 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.781 20:02:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 [2024-07-24 20:02:53.707534] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:23:06.781 [2024-07-24 20:02:53.707601] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.781 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.781 [2024-07-24 20:02:53.796138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:06.781 [2024-07-24 20:02:53.891243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.781 [2024-07-24 20:02:53.891304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:06.781 [2024-07-24 20:02:53.891312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.781 [2024-07-24 20:02:53.891325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.781 [2024-07-24 20:02:53.891331] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.781 [2024-07-24 20:02:53.891465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.781 [2024-07-24 20:02:53.891772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.781 [2024-07-24 20:02:53.891773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 [2024-07-24 20:02:54.541131] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 Malloc0 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 [2024-07-24 
20:02:54.618712] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 [2024-07-24 20:02:54.630652] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 Malloc1 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3754247 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3754247 /var/tmp/bdevperf.sock 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3754247 ']' 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.781 20:02:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.724 NVMe0n1 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.724 1 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.724 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:07.724 request:
00:23:07.724 {
00:23:07.725 "name": "NVMe0",
00:23:07.725 "trtype": "tcp",
00:23:07.725 "traddr": "10.0.0.2",
00:23:07.725 "adrfam": "ipv4",
00:23:07.725 "trsvcid": "4420",
00:23:07.725 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:07.725 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:23:07.725 "hostaddr": "10.0.0.2",
00:23:07.725 "hostsvcid": "60000",
00:23:07.725 "prchk_reftag": false,
00:23:07.725 "prchk_guard": false,
00:23:07.725 "hdgst": false,
00:23:07.725 "ddgst": false,
00:23:07.725 "method": "bdev_nvme_attach_controller",
00:23:07.725 "req_id": 1
00:23:07.725 }
00:23:07.725 Got JSON-RPC error response
00:23:07.725 response:
00:23:07.725 {
00:23:07.725 "code": -114,
00:23:07.725 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:23:07.725 }
00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:23:07.725 20:02:55
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.725 request: 00:23:07.725 { 00:23:07.725 "name": "NVMe0", 00:23:07.725 "trtype": "tcp", 00:23:07.725 "traddr": "10.0.0.2", 00:23:07.725 "adrfam": "ipv4", 00:23:07.725 "trsvcid": "4420", 00:23:07.725 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:07.725 "hostaddr": "10.0.0.2", 00:23:07.725 "hostsvcid": "60000", 00:23:07.725 "prchk_reftag": false, 00:23:07.725 "prchk_guard": false, 00:23:07.725 "hdgst": false, 00:23:07.725 "ddgst": false, 00:23:07.725 "method": "bdev_nvme_attach_controller", 00:23:07.725 "req_id": 1 00:23:07.725 } 00:23:07.725 Got JSON-RPC error response 00:23:07.725 response: 00:23:07.725 { 00:23:07.725 "code": -114, 00:23:07.725 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:07.725 } 00:23:07.725 
20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:07.725 20:02:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.725 request: 00:23:07.725 { 00:23:07.725 "name": "NVMe0", 00:23:07.725 "trtype": "tcp", 00:23:07.725 "traddr": "10.0.0.2", 00:23:07.725 "adrfam": "ipv4", 00:23:07.725 "trsvcid": "4420", 00:23:07.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.725 "hostaddr": "10.0.0.2", 00:23:07.725 "hostsvcid": "60000", 00:23:07.725 "prchk_reftag": false, 00:23:07.725 "prchk_guard": false, 00:23:07.725 "hdgst": false, 00:23:07.725 "ddgst": false, 00:23:07.725 "multipath": "disable", 00:23:07.725 "method": "bdev_nvme_attach_controller", 00:23:07.725 "req_id": 1 00:23:07.725 } 00:23:07.725 Got JSON-RPC error response 00:23:07.725 response: 00:23:07.725 { 00:23:07.725 "code": -114, 00:23:07.725 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:07.725 } 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.725 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.987 request: 00:23:07.987 { 00:23:07.987 "name": "NVMe0", 00:23:07.987 "trtype": "tcp", 00:23:07.987 "traddr": "10.0.0.2", 00:23:07.987 "adrfam": "ipv4", 00:23:07.987 "trsvcid": "4420", 00:23:07.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.987 "hostaddr": "10.0.0.2", 00:23:07.987 "hostsvcid": "60000", 00:23:07.987 "prchk_reftag": false, 00:23:07.987 "prchk_guard": false, 00:23:07.987 "hdgst": false, 00:23:07.987 "ddgst": false, 00:23:07.987 "multipath": "failover", 00:23:07.987 "method": "bdev_nvme_attach_controller", 00:23:07.987 "req_id": 1 00:23:07.987 } 00:23:07.987 Got JSON-RPC error response 00:23:07.987 response: 00:23:07.987 { 00:23:07.987 "code": -114, 00:23:07.987 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:07.987 
} 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.987 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:07.987 20:02:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.987 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:07.987 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.248 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:08.248 20:02:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.190 0 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3754247 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 3754247 ']' 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3754247 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3754247 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3754247' 00:23:09.191 killing process with pid 3754247 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3754247 00:23:09.191 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3754247 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:09.452 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:09.452 [2024-07-24 20:02:54.751082] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:23:09.452 [2024-07-24 20:02:54.751138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754247 ]
00:23:09.452 EAL: No free 2048 kB hugepages reported on node 1
00:23:09.452 [2024-07-24 20:02:54.809620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:09.452 [2024-07-24 20:02:54.873670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:09.452 [2024-07-24 20:02:55.919586] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 6080f781-499e-470c-a59b-3b028cb7c3b1 already exists
00:23:09.452 [2024-07-24 20:02:55.919618] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:6080f781-499e-470c-a59b-3b028cb7c3b1 alias for bdev NVMe1n1
00:23:09.452 [2024-07-24 20:02:55.919626] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:23:09.452 Running I/O for 1 seconds...
00:23:09.452
00:23:09.452 Latency(us)
00:23:09.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.452 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:09.452 NVMe0n1 : 1.00 29157.88 113.90 0.00 0.00 4380.18 3440.64 16165.55
00:23:09.452 ===================================================================================================================
00:23:09.452 Total : 29157.88 113.90 0.00 0.00 4380.18 3440.64 16165.55
00:23:09.452 Received shutdown signal, test time was about 1.000000 seconds
00:23:09.452
00:23:09.452 Latency(us)
00:23:09.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.452 ===================================================================================================================
00:23:09.452 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:09.452 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:09.452 rmmod nvme_tcp 00:23:09.452 rmmod nvme_fabrics 00:23:09.452 rmmod nvme_keyring 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3754004 ']' 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3754004 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3754004 ']' 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3754004 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.452 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3754004 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3754004' 00:23:09.713 killing process with pid 3754004 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3754004 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3754004 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.713 
20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.713 20:02:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:12.261 00:23:12.261 real 0m13.297s 00:23:12.261 user 0m15.948s 00:23:12.261 sys 0m6.050s 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.261 ************************************ 00:23:12.261 END TEST nvmf_multicontroller 00:23:12.261 ************************************ 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.261 ************************************ 00:23:12.261 START TEST nvmf_aer 00:23:12.261 ************************************ 00:23:12.261 20:02:59 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:12.261 * Looking for test storage... 00:23:12.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.261 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.262 20:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.852 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:18.853 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:18.853 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:18.853 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:18.853 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.853 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.115 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:23:19.115 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.115 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.115 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.115 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.115 20:03:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:19.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:23:19.115 00:23:19.115 --- 10.0.0.2 ping statistics --- 00:23:19.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.115 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:19.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:23:19.115 00:23:19.115 --- 10.0.0.1 ping statistics --- 00:23:19.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.115 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.115 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3759029 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3759029 00:23:19.375 20:03:07 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3759029 ']' 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.375 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:19.375 [2024-07-24 20:03:07.127078] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:23:19.375 [2024-07-24 20:03:07.127154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.375 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.375 [2024-07-24 20:03:07.197516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.375 [2024-07-24 20:03:07.272646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.375 [2024-07-24 20:03:07.272685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.375 [2024-07-24 20:03:07.272693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.375 [2024-07-24 20:03:07.272700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:19.375 [2024-07-24 20:03:07.272706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.375 [2024-07-24 20:03:07.272841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.375 [2024-07-24 20:03:07.272956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.375 [2024-07-24 20:03:07.273094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.375 [2024-07-24 20:03:07.273096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 [2024-07-24 20:03:07.952146] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:07 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 Malloc0 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 [2024-07-24 20:03:08.011365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 [ 
00:23:20.351 { 00:23:20.351 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:20.351 "subtype": "Discovery", 00:23:20.351 "listen_addresses": [], 00:23:20.351 "allow_any_host": true, 00:23:20.351 "hosts": [] 00:23:20.351 }, 00:23:20.351 { 00:23:20.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.351 "subtype": "NVMe", 00:23:20.351 "listen_addresses": [ 00:23:20.351 { 00:23:20.351 "trtype": "TCP", 00:23:20.351 "adrfam": "IPv4", 00:23:20.351 "traddr": "10.0.0.2", 00:23:20.351 "trsvcid": "4420" 00:23:20.351 } 00:23:20.351 ], 00:23:20.351 "allow_any_host": true, 00:23:20.351 "hosts": [], 00:23:20.351 "serial_number": "SPDK00000000000001", 00:23:20.351 "model_number": "SPDK bdev Controller", 00:23:20.351 "max_namespaces": 2, 00:23:20.351 "min_cntlid": 1, 00:23:20.351 "max_cntlid": 65519, 00:23:20.351 "namespaces": [ 00:23:20.351 { 00:23:20.351 "nsid": 1, 00:23:20.351 "bdev_name": "Malloc0", 00:23:20.351 "name": "Malloc0", 00:23:20.351 "nguid": "7ED1717B89B2438CA785CD5140142033", 00:23:20.351 "uuid": "7ed1717b-89b2-438c-a785-cd5140142033" 00:23:20.351 } 00:23:20.351 ] 00:23:20.351 } 00:23:20.351 ] 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3759248 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:20.351 20:03:08 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:20.351 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 Malloc1 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.351 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.612 Asynchronous Event Request test 00:23:20.612 Attaching to 10.0.0.2 00:23:20.612 Attached to 10.0.0.2 00:23:20.612 Registering asynchronous event callbacks... 00:23:20.612 Starting namespace attribute notice tests for all controllers... 00:23:20.612 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:20.612 aer_cb - Changed Namespace 00:23:20.612 Cleaning up... 
00:23:20.612 [ 00:23:20.612 { 00:23:20.612 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:20.612 "subtype": "Discovery", 00:23:20.612 "listen_addresses": [], 00:23:20.612 "allow_any_host": true, 00:23:20.612 "hosts": [] 00:23:20.612 }, 00:23:20.612 { 00:23:20.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.612 "subtype": "NVMe", 00:23:20.612 "listen_addresses": [ 00:23:20.612 { 00:23:20.612 "trtype": "TCP", 00:23:20.612 "adrfam": "IPv4", 00:23:20.612 "traddr": "10.0.0.2", 00:23:20.612 "trsvcid": "4420" 00:23:20.612 } 00:23:20.612 ], 00:23:20.612 "allow_any_host": true, 00:23:20.612 "hosts": [], 00:23:20.612 "serial_number": "SPDK00000000000001", 00:23:20.612 "model_number": "SPDK bdev Controller", 00:23:20.612 "max_namespaces": 2, 00:23:20.612 "min_cntlid": 1, 00:23:20.612 "max_cntlid": 65519, 00:23:20.612 "namespaces": [ 00:23:20.612 { 00:23:20.612 "nsid": 1, 00:23:20.612 "bdev_name": "Malloc0", 00:23:20.612 "name": "Malloc0", 00:23:20.612 "nguid": "7ED1717B89B2438CA785CD5140142033", 00:23:20.612 "uuid": "7ed1717b-89b2-438c-a785-cd5140142033" 00:23:20.612 }, 00:23:20.612 { 00:23:20.612 "nsid": 2, 00:23:20.612 "bdev_name": "Malloc1", 00:23:20.612 "name": "Malloc1", 00:23:20.612 "nguid": "A676D08162AA46F8838A36E925A59604", 00:23:20.612 "uuid": "a676d081-62aa-46f8-838a-36e925a59604" 00:23:20.612 } 00:23:20.612 ] 00:23:20.612 } 00:23:20.612 ] 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3759248 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.612 20:03:08 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.612 rmmod nvme_tcp 00:23:20.612 rmmod nvme_fabrics 00:23:20.612 rmmod nvme_keyring 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 
3759029 ']' 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3759029 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3759029 ']' 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3759029 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3759029 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3759029' 00:23:20.612 killing process with pid 3759029 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3759029 00:23:20.612 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3759029 00:23:20.872 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.872 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:20.872 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:20.872 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.872 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.872 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.872 20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.872 
20:03:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.787 20:03:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.787 00:23:22.787 real 0m10.963s 00:23:22.787 user 0m7.451s 00:23:22.787 sys 0m5.786s 00:23:22.787 20:03:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.787 20:03:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 ************************************ 00:23:22.787 END TEST nvmf_aer 00:23:22.787 ************************************ 00:23:22.787 20:03:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:22.787 20:03:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.787 20:03:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.787 20:03:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.049 ************************************ 00:23:23.049 START TEST nvmf_async_init 00:23:23.049 ************************************ 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:23.049 * Looking for test storage... 
00:23:23.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.049 20:03:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:23.049 20:03:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f5c6238a05e84529996f11a68e74c75d 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.049 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.050 20:03:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.206 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.206 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:31.206 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:23:31.206 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.207 
20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:31.207 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.207 20:03:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:31.207 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:31.207 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:31.207 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.207 20:03:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:23:31.207 00:23:31.207 --- 10.0.0.2 ping statistics --- 00:23:31.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.207 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:23:31.207 00:23:31.207 --- 10.0.0.1 ping statistics --- 00:23:31.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.207 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.207 20:03:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3763839 00:23:31.207 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3763839 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3763839 ']' 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 [2024-07-24 20:03:18.135396] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:23:31.208 [2024-07-24 20:03:18.135468] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.208 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.208 [2024-07-24 20:03:18.205308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.208 [2024-07-24 20:03:18.279105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.208 [2024-07-24 20:03:18.279142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.208 [2024-07-24 20:03:18.279150] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.208 [2024-07-24 20:03:18.279156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.208 [2024-07-24 20:03:18.279162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:31.208 [2024-07-24 20:03:18.279181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 [2024-07-24 20:03:18.950117] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 null0 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f5c6238a05e84529996f11a68e74c75d 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.208 20:03:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.208 [2024-07-24 20:03:19.010374] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.208 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.469 nvme0n1 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.469 [ 00:23:31.469 { 00:23:31.469 "name": "nvme0n1", 00:23:31.469 "aliases": [ 00:23:31.469 "f5c6238a-05e8-4529-996f-11a68e74c75d" 00:23:31.469 ], 00:23:31.469 "product_name": "NVMe disk", 00:23:31.469 "block_size": 512, 00:23:31.469 "num_blocks": 2097152, 00:23:31.469 "uuid": "f5c6238a-05e8-4529-996f-11a68e74c75d", 00:23:31.469 "assigned_rate_limits": { 00:23:31.469 "rw_ios_per_sec": 0, 00:23:31.469 "rw_mbytes_per_sec": 0, 00:23:31.469 "r_mbytes_per_sec": 0, 00:23:31.469 "w_mbytes_per_sec": 0 00:23:31.469 }, 00:23:31.469 "claimed": false, 00:23:31.469 "zoned": false, 00:23:31.469 "supported_io_types": { 00:23:31.469 "read": true, 00:23:31.469 "write": true, 00:23:31.469 "unmap": false, 00:23:31.469 "flush": true, 00:23:31.469 "reset": true, 00:23:31.469 "nvme_admin": true, 00:23:31.469 "nvme_io": true, 00:23:31.469 "nvme_io_md": false, 00:23:31.469 "write_zeroes": true, 00:23:31.469 "zcopy": false, 00:23:31.469 "get_zone_info": false, 00:23:31.469 "zone_management": false, 00:23:31.469 "zone_append": false, 00:23:31.469 "compare": true, 00:23:31.469 "compare_and_write": true, 00:23:31.469 "abort": true, 00:23:31.469 "seek_hole": false, 00:23:31.469 "seek_data": false, 00:23:31.469 "copy": true, 00:23:31.469 "nvme_iov_md": false 
00:23:31.469 }, 00:23:31.469 "memory_domains": [ 00:23:31.469 { 00:23:31.469 "dma_device_id": "system", 00:23:31.469 "dma_device_type": 1 00:23:31.469 } 00:23:31.469 ], 00:23:31.469 "driver_specific": { 00:23:31.469 "nvme": [ 00:23:31.469 { 00:23:31.469 "trid": { 00:23:31.469 "trtype": "TCP", 00:23:31.469 "adrfam": "IPv4", 00:23:31.469 "traddr": "10.0.0.2", 00:23:31.469 "trsvcid": "4420", 00:23:31.469 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:31.469 }, 00:23:31.469 "ctrlr_data": { 00:23:31.469 "cntlid": 1, 00:23:31.469 "vendor_id": "0x8086", 00:23:31.469 "model_number": "SPDK bdev Controller", 00:23:31.469 "serial_number": "00000000000000000000", 00:23:31.469 "firmware_revision": "24.09", 00:23:31.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.469 "oacs": { 00:23:31.469 "security": 0, 00:23:31.469 "format": 0, 00:23:31.469 "firmware": 0, 00:23:31.469 "ns_manage": 0 00:23:31.469 }, 00:23:31.469 "multi_ctrlr": true, 00:23:31.469 "ana_reporting": false 00:23:31.469 }, 00:23:31.469 "vs": { 00:23:31.469 "nvme_version": "1.3" 00:23:31.469 }, 00:23:31.469 "ns_data": { 00:23:31.469 "id": 1, 00:23:31.469 "can_share": true 00:23:31.469 } 00:23:31.469 } 00:23:31.469 ], 00:23:31.469 "mp_policy": "active_passive" 00:23:31.469 } 00:23:31.469 } 00:23:31.469 ] 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.469 [2024-07-24 20:03:19.280215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:31.469 [2024-07-24 20:03:19.280276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c5f40 
(9): Bad file descriptor 00:23:31.469 [2024-07-24 20:03:19.412298] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.469 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.731 [ 00:23:31.731 { 00:23:31.731 "name": "nvme0n1", 00:23:31.731 "aliases": [ 00:23:31.731 "f5c6238a-05e8-4529-996f-11a68e74c75d" 00:23:31.731 ], 00:23:31.731 "product_name": "NVMe disk", 00:23:31.731 "block_size": 512, 00:23:31.731 "num_blocks": 2097152, 00:23:31.731 "uuid": "f5c6238a-05e8-4529-996f-11a68e74c75d", 00:23:31.731 "assigned_rate_limits": { 00:23:31.731 "rw_ios_per_sec": 0, 00:23:31.731 "rw_mbytes_per_sec": 0, 00:23:31.731 "r_mbytes_per_sec": 0, 00:23:31.731 "w_mbytes_per_sec": 0 00:23:31.731 }, 00:23:31.731 "claimed": false, 00:23:31.731 "zoned": false, 00:23:31.731 "supported_io_types": { 00:23:31.731 "read": true, 00:23:31.731 "write": true, 00:23:31.731 "unmap": false, 00:23:31.731 "flush": true, 00:23:31.731 "reset": true, 00:23:31.731 "nvme_admin": true, 00:23:31.731 "nvme_io": true, 00:23:31.731 "nvme_io_md": false, 00:23:31.731 "write_zeroes": true, 00:23:31.731 "zcopy": false, 00:23:31.731 "get_zone_info": false, 00:23:31.731 "zone_management": false, 00:23:31.731 "zone_append": false, 00:23:31.731 "compare": true, 00:23:31.731 "compare_and_write": true, 00:23:31.731 "abort": true, 00:23:31.731 "seek_hole": false, 00:23:31.731 "seek_data": false, 00:23:31.731 "copy": true, 00:23:31.731 "nvme_iov_md": false 00:23:31.731 }, 00:23:31.731 "memory_domains": [ 00:23:31.731 { 00:23:31.731 "dma_device_id": "system", 00:23:31.731 "dma_device_type": 1 
00:23:31.731 } 00:23:31.731 ], 00:23:31.731 "driver_specific": { 00:23:31.731 "nvme": [ 00:23:31.731 { 00:23:31.731 "trid": { 00:23:31.731 "trtype": "TCP", 00:23:31.731 "adrfam": "IPv4", 00:23:31.731 "traddr": "10.0.0.2", 00:23:31.731 "trsvcid": "4420", 00:23:31.731 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:31.731 }, 00:23:31.731 "ctrlr_data": { 00:23:31.731 "cntlid": 2, 00:23:31.731 "vendor_id": "0x8086", 00:23:31.731 "model_number": "SPDK bdev Controller", 00:23:31.731 "serial_number": "00000000000000000000", 00:23:31.731 "firmware_revision": "24.09", 00:23:31.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.731 "oacs": { 00:23:31.731 "security": 0, 00:23:31.731 "format": 0, 00:23:31.731 "firmware": 0, 00:23:31.731 "ns_manage": 0 00:23:31.731 }, 00:23:31.731 "multi_ctrlr": true, 00:23:31.731 "ana_reporting": false 00:23:31.731 }, 00:23:31.731 "vs": { 00:23:31.731 "nvme_version": "1.3" 00:23:31.731 }, 00:23:31.731 "ns_data": { 00:23:31.731 "id": 1, 00:23:31.731 "can_share": true 00:23:31.731 } 00:23:31.731 } 00:23:31.731 ], 00:23:31.731 "mp_policy": "active_passive" 00:23:31.731 } 00:23:31.731 } 00:23:31.731 ] 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GSefoyF0C0 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GSefoyF0C0 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.731 [2024-07-24 20:03:19.480837] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.731 [2024-07-24 20:03:19.480949] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GSefoyF0C0 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.731 [2024-07-24 20:03:19.492859] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GSefoyF0C0 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.731 [2024-07-24 20:03:19.504910] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.731 [2024-07-24 20:03:19.504947] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:31.731 nvme0n1 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.731 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.731 [ 00:23:31.731 { 00:23:31.731 "name": "nvme0n1", 00:23:31.731 "aliases": [ 00:23:31.731 "f5c6238a-05e8-4529-996f-11a68e74c75d" 00:23:31.731 ], 00:23:31.731 "product_name": "NVMe disk", 00:23:31.731 "block_size": 512, 00:23:31.731 "num_blocks": 2097152, 00:23:31.731 "uuid": "f5c6238a-05e8-4529-996f-11a68e74c75d", 00:23:31.731 "assigned_rate_limits": { 00:23:31.731 "rw_ios_per_sec": 0, 00:23:31.731 "rw_mbytes_per_sec": 0, 00:23:31.731 "r_mbytes_per_sec": 0, 00:23:31.731 "w_mbytes_per_sec": 0 00:23:31.731 }, 00:23:31.731 "claimed": false, 00:23:31.731 "zoned": false, 00:23:31.731 "supported_io_types": { 
00:23:31.731 "read": true, 00:23:31.731 "write": true, 00:23:31.731 "unmap": false, 00:23:31.731 "flush": true, 00:23:31.731 "reset": true, 00:23:31.731 "nvme_admin": true, 00:23:31.731 "nvme_io": true, 00:23:31.731 "nvme_io_md": false, 00:23:31.731 "write_zeroes": true, 00:23:31.731 "zcopy": false, 00:23:31.731 "get_zone_info": false, 00:23:31.731 "zone_management": false, 00:23:31.731 "zone_append": false, 00:23:31.731 "compare": true, 00:23:31.731 "compare_and_write": true, 00:23:31.731 "abort": true, 00:23:31.731 "seek_hole": false, 00:23:31.731 "seek_data": false, 00:23:31.731 "copy": true, 00:23:31.731 "nvme_iov_md": false 00:23:31.731 }, 00:23:31.731 "memory_domains": [ 00:23:31.731 { 00:23:31.731 "dma_device_id": "system", 00:23:31.731 "dma_device_type": 1 00:23:31.731 } 00:23:31.731 ], 00:23:31.731 "driver_specific": { 00:23:31.731 "nvme": [ 00:23:31.731 { 00:23:31.731 "trid": { 00:23:31.731 "trtype": "TCP", 00:23:31.731 "adrfam": "IPv4", 00:23:31.731 "traddr": "10.0.0.2", 00:23:31.731 "trsvcid": "4421", 00:23:31.731 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:31.731 }, 00:23:31.731 "ctrlr_data": { 00:23:31.731 "cntlid": 3, 00:23:31.731 "vendor_id": "0x8086", 00:23:31.732 "model_number": "SPDK bdev Controller", 00:23:31.732 "serial_number": "00000000000000000000", 00:23:31.732 "firmware_revision": "24.09", 00:23:31.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.732 "oacs": { 00:23:31.732 "security": 0, 00:23:31.732 "format": 0, 00:23:31.732 "firmware": 0, 00:23:31.732 "ns_manage": 0 00:23:31.732 }, 00:23:31.732 "multi_ctrlr": true, 00:23:31.732 "ana_reporting": false 00:23:31.732 }, 00:23:31.732 "vs": { 00:23:31.732 "nvme_version": "1.3" 00:23:31.732 }, 00:23:31.732 "ns_data": { 00:23:31.732 "id": 1, 00:23:31.732 "can_share": true 00:23:31.732 } 00:23:31.732 } 00:23:31.732 ], 00:23:31.732 "mp_policy": "active_passive" 00:23:31.732 } 00:23:31.732 } 00:23:31.732 ] 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.GSefoyF0C0 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.732 rmmod nvme_tcp 00:23:31.732 rmmod nvme_fabrics 00:23:31.732 rmmod nvme_keyring 00:23:31.732 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3763839 ']' 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
3763839 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3763839 ']' 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3763839 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3763839 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3763839' 00:23:31.993 killing process with pid 3763839 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3763839 00:23:31.993 [2024-07-24 20:03:19.745973] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:31.993 [2024-07-24 20:03:19.745999] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3763839 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.993 20:03:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.543 20:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:34.543 00:23:34.543 real 0m11.176s 00:23:34.543 user 0m3.977s 00:23:34.543 sys 0m5.651s 00:23:34.543 20:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.543 20:03:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.543 ************************************ 00:23:34.543 END TEST nvmf_async_init 00:23:34.543 ************************************ 00:23:34.543 20:03:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:34.543 20:03:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:34.543 20:03:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.543 20:03:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.543 ************************************ 00:23:34.543 START TEST dma 00:23:34.543 ************************************ 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:34.543 * Looking for test storage... 
00:23:34.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.543 20:03:22 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.543 20:03:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:34.544 00:23:34.544 real 0m0.135s 00:23:34.544 user 0m0.064s 00:23:34.544 sys 0m0.078s 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:34.544 ************************************ 00:23:34.544 END TEST dma 00:23:34.544 ************************************ 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.544 ************************************ 00:23:34.544 START TEST nvmf_identify 00:23:34.544 ************************************ 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:34.544 * Looking for test storage... 
00:23:34.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.544 20:03:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.751 20:03:29 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:42.751 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:42.751 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:42.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:42.751 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.751 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:42.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:42.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:23:42.752 00:23:42.752 --- 10.0.0.2 ping statistics --- 00:23:42.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.752 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:42.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:23:42.752 00:23:42.752 --- 10.0.0.1 ping statistics --- 00:23:42.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.752 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3768427 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3768427 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3768427 ']' 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.752 20:03:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 [2024-07-24 20:03:29.694567] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:23:42.752 [2024-07-24 20:03:29.694633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.752 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.752 [2024-07-24 20:03:29.765035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.752 [2024-07-24 20:03:29.841623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.752 [2024-07-24 20:03:29.841665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.752 [2024-07-24 20:03:29.841673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.752 [2024-07-24 20:03:29.841680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.752 [2024-07-24 20:03:29.841685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:42.752 [2024-07-24 20:03:29.841828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.752 [2024-07-24 20:03:29.841949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.752 [2024-07-24 20:03:29.842111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.752 [2024-07-24 20:03:29.842112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 [2024-07-24 20:03:30.485028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 Malloc0 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 20:03:30 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 [2024-07-24 20:03:30.584316] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 20:03:30 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 [ 00:23:42.752 { 00:23:42.752 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:42.752 "subtype": "Discovery", 00:23:42.752 "listen_addresses": [ 00:23:42.752 { 00:23:42.752 "trtype": "TCP", 00:23:42.752 "adrfam": "IPv4", 00:23:42.752 "traddr": "10.0.0.2", 00:23:42.752 "trsvcid": "4420" 00:23:42.752 } 00:23:42.752 ], 00:23:42.752 "allow_any_host": true, 00:23:42.752 "hosts": [] 00:23:42.752 }, 00:23:42.752 { 00:23:42.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.752 "subtype": "NVMe", 00:23:42.752 "listen_addresses": [ 00:23:42.752 { 00:23:42.752 "trtype": "TCP", 00:23:42.752 "adrfam": "IPv4", 00:23:42.752 "traddr": "10.0.0.2", 00:23:42.752 "trsvcid": "4420" 00:23:42.752 } 00:23:42.752 ], 00:23:42.752 "allow_any_host": true, 00:23:42.752 "hosts": [], 00:23:42.752 "serial_number": "SPDK00000000000001", 00:23:42.752 "model_number": "SPDK bdev Controller", 00:23:42.752 "max_namespaces": 32, 00:23:42.752 "min_cntlid": 1, 00:23:42.752 "max_cntlid": 65519, 00:23:42.752 "namespaces": [ 00:23:42.752 { 00:23:42.752 "nsid": 1, 00:23:42.752 "bdev_name": "Malloc0", 00:23:42.752 "name": "Malloc0", 00:23:42.752 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:42.752 "eui64": "ABCDEF0123456789", 00:23:42.752 "uuid": "4c2bc66a-be73-489c-a1e1-24b41b24a9d1" 00:23:42.752 } 00:23:42.752 ] 00:23:42.752 } 00:23:42.752 ] 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:42.752 [2024-07-24 20:03:30.645712] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:23:42.752 [2024-07-24 20:03:30.645756] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768585 ] 00:23:42.752 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.752 [2024-07-24 20:03:30.676893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:42.752 [2024-07-24 20:03:30.676942] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:42.752 [2024-07-24 20:03:30.676947] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:42.752 [2024-07-24 20:03:30.676958] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:42.752 [2024-07-24 20:03:30.676965] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:42.753 [2024-07-24 20:03:30.680228] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:42.753 [2024-07-24 20:03:30.680255] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbf4ec0 0 00:23:42.753 [2024-07-24 20:03:30.688209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:42.753 [2024-07-24 20:03:30.688238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:42.753 [2024-07-24 20:03:30.688243] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
00:23:42.753 [2024-07-24 20:03:30.688246] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:42.753 [2024-07-24 20:03:30.688284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.688290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.688294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.688307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:42.753 [2024-07-24 20:03:30.688323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.696212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:42.753 [2024-07-24 20:03:30.696221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:42.753 [2024-07-24 20:03:30.696224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:42.753 [2024-07-24 20:03:30.696238] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:42.753 [2024-07-24 20:03:30.696244] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:42.753 [2024-07-24 20:03:30.696249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:42.753 [2024-07-24 20:03:30.696262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.696277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.753 [2024-07-24 20:03:30.696289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.696536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:42.753 [2024-07-24 20:03:30.696543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:42.753 [2024-07-24 20:03:30.696547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:42.753 [2024-07-24 20:03:30.696559] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:42.753 [2024-07-24 20:03:30.696567] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:42.753 [2024-07-24 20:03:30.696574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.696588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.753 [2024-07-24 20:03:30.696600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.696849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:42.753 [2024-07-24 20:03:30.696855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:42.753 [2024-07-24 20:03:30.696858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:42.753 [2024-07-24 20:03:30.696867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:42.753 [2024-07-24 20:03:30.696878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:42.753 [2024-07-24 20:03:30.696885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.696892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.696899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.753 [2024-07-24 20:03:30.696910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.697167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:42.753 [2024-07-24 20:03:30.697173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:42.753 [2024-07-24 20:03:30.697177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:42.753 [2024-07-24 20:03:30.697185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:42.753 [2024-07-24 20:03:30.697194] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.697216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.753 [2024-07-24 20:03:30.697227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.697556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:42.753 [2024-07-24 20:03:30.697562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:42.753 [2024-07-24 20:03:30.697565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:42.753 [2024-07-24 20:03:30.697573] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:42.753 [2024-07-24 20:03:30.697578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:42.753 [2024-07-24 20:03:30.697585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:42.753 [2024-07-24 20:03:30.697691] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:42.753 [2024-07-24 20:03:30.697695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:23:42.753 [2024-07-24 20:03:30.697703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.697717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.753 [2024-07-24 20:03:30.697727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.697962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:42.753 [2024-07-24 20:03:30.697969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:42.753 [2024-07-24 20:03:30.697975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:42.753 [2024-07-24 20:03:30.697983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:42.753 [2024-07-24 20:03:30.697993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.697996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.698000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.698006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.753 [2024-07-24 20:03:30.698016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.698261] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:42.753 [2024-07-24 20:03:30.698268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:42.753 [2024-07-24 20:03:30.698271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.698275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:42.753 [2024-07-24 20:03:30.698279] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:42.753 [2024-07-24 20:03:30.698284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:42.753 [2024-07-24 20:03:30.698292] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:42.753 [2024-07-24 20:03:30.698305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:42.753 [2024-07-24 20:03:30.698314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.698318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:42.753 [2024-07-24 20:03:30.698325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.753 [2024-07-24 20:03:30.698336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:42.753 [2024-07-24 20:03:30.698608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:42.753 [2024-07-24 20:03:30.698615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:42.753 
[2024-07-24 20:03:30.698619] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.698623] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf4ec0): datao=0, datal=4096, cccid=0 00:23:42.753 [2024-07-24 20:03:30.698627] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc77e40) on tqpair(0xbf4ec0): expected_datao=0, payload_size=4096 00:23:42.753 [2024-07-24 20:03:30.698632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:42.753 [2024-07-24 20:03:30.698752] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:42.754 [2024-07-24 20:03:30.698756] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.017 [2024-07-24 20:03:30.739435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.017 [2024-07-24 20:03:30.739448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.017 [2024-07-24 20:03:30.739451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.017 [2024-07-24 20:03:30.739456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:43.017 [2024-07-24 20:03:30.739464] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:43.017 [2024-07-24 20:03:30.739469] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:43.017 [2024-07-24 20:03:30.739477] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:43.017 [2024-07-24 20:03:30.739482] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:43.017 [2024-07-24 20:03:30.739487] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 
00:23:43.017 [2024-07-24 20:03:30.739492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:43.017 [2024-07-24 20:03:30.739500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:43.017 [2024-07-24 20:03:30.739511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.017 [2024-07-24 20:03:30.739515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.017 [2024-07-24 20:03:30.739519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:43.017 [2024-07-24 20:03:30.739527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:43.017 [2024-07-24 20:03:30.739539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:43.017 [2024-07-24 20:03:30.739699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.017 [2024-07-24 20:03:30.739705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.017 [2024-07-24 20:03:30.739708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.017 [2024-07-24 20:03:30.739712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:43.018 [2024-07-24 20:03:30.739720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.739733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:43.018 [2024-07-24 20:03:30.739739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.739752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.018 [2024-07-24 20:03:30.739758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.739771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.018 [2024-07-24 20:03:30.739776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.739789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.018 [2024-07-24 20:03:30.739794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:43.018 [2024-07-24 20:03:30.739804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 
00:23:43.018 [2024-07-24 20:03:30.739813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.739816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.739823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.018 [2024-07-24 20:03:30.739835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77e40, cid 0, qid 0 00:23:43.018 [2024-07-24 20:03:30.739840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc77fc0, cid 1, qid 0 00:23:43.018 [2024-07-24 20:03:30.739845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc78140, cid 2, qid 0 00:23:43.018 [2024-07-24 20:03:30.739849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc782c0, cid 3, qid 0 00:23:43.018 [2024-07-24 20:03:30.739854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc78440, cid 4, qid 0 00:23:43.018 [2024-07-24 20:03:30.740133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.018 [2024-07-24 20:03:30.740140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.018 [2024-07-24 20:03:30.740143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.740147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc78440) on tqpair=0xbf4ec0 00:23:43.018 [2024-07-24 20:03:30.740152] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:43.018 [2024-07-24 20:03:30.740156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:43.018 [2024-07-24 20:03:30.740168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.740171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.740178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.018 [2024-07-24 20:03:30.740188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc78440, cid 4, qid 0 00:23:43.018 [2024-07-24 20:03:30.744210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.018 [2024-07-24 20:03:30.744218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.018 [2024-07-24 20:03:30.744221] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744225] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf4ec0): datao=0, datal=4096, cccid=4 00:23:43.018 [2024-07-24 20:03:30.744229] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc78440) on tqpair(0xbf4ec0): expected_datao=0, payload_size=4096 00:23:43.018 [2024-07-24 20:03:30.744234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744240] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744244] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.018 [2024-07-24 20:03:30.744255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.018 [2024-07-24 20:03:30.744259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc78440) on tqpair=0xbf4ec0 00:23:43.018 [2024-07-24 20:03:30.744274] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:43.018 [2024-07-24 20:03:30.744296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.744307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.018 [2024-07-24 20:03:30.744316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.744329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.018 [2024-07-24 20:03:30.744344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc78440, cid 4, qid 0 00:23:43.018 [2024-07-24 20:03:30.744350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc785c0, cid 5, qid 0 00:23:43.018 [2024-07-24 20:03:30.744660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.018 [2024-07-24 20:03:30.744666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.018 [2024-07-24 20:03:30.744670] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744673] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf4ec0): datao=0, datal=1024, cccid=4 00:23:43.018 [2024-07-24 20:03:30.744678] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc78440) on tqpair(0xbf4ec0): expected_datao=0, payload_size=1024 00:23:43.018 [2024-07-24 
20:03:30.744682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744689] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744692] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.018 [2024-07-24 20:03:30.744703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.018 [2024-07-24 20:03:30.744707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.744711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc785c0) on tqpair=0xbf4ec0 00:23:43.018 [2024-07-24 20:03:30.787211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.018 [2024-07-24 20:03:30.787222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.018 [2024-07-24 20:03:30.787225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.787229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc78440) on tqpair=0xbf4ec0 00:23:43.018 [2024-07-24 20:03:30.787247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.787251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.787259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.018 [2024-07-24 20:03:30.787274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc78440, cid 4, qid 0 00:23:43.018 [2024-07-24 20:03:30.787492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.018 [2024-07-24 20:03:30.787498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:23:43.018 [2024-07-24 20:03:30.787502] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.787505] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf4ec0): datao=0, datal=3072, cccid=4 00:23:43.018 [2024-07-24 20:03:30.787510] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc78440) on tqpair(0xbf4ec0): expected_datao=0, payload_size=3072 00:23:43.018 [2024-07-24 20:03:30.787514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.787570] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.787574] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.828471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.018 [2024-07-24 20:03:30.828482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.018 [2024-07-24 20:03:30.828486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.828493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc78440) on tqpair=0xbf4ec0 00:23:43.018 [2024-07-24 20:03:30.828503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.828507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbf4ec0) 00:23:43.018 [2024-07-24 20:03:30.828514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.018 [2024-07-24 20:03:30.828529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc78440, cid 4, qid 0 00:23:43.018 [2024-07-24 20:03:30.828723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.018 [2024-07-24 20:03:30.828729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:43.018 [2024-07-24 20:03:30.828733] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.828736] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbf4ec0): datao=0, datal=8, cccid=4 00:23:43.018 [2024-07-24 20:03:30.828741] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc78440) on tqpair(0xbf4ec0): expected_datao=0, payload_size=8 00:23:43.018 [2024-07-24 20:03:30.828745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.828752] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.018 [2024-07-24 20:03:30.828755] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.019 [2024-07-24 20:03:30.869394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.019 [2024-07-24 20:03:30.869407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.019 [2024-07-24 20:03:30.869410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.019 [2024-07-24 20:03:30.869414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc78440) on tqpair=0xbf4ec0 00:23:43.019 ===================================================== 00:23:43.019 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:43.019 ===================================================== 00:23:43.019 Controller Capabilities/Features 00:23:43.019 ================================ 00:23:43.019 Vendor ID: 0000 00:23:43.019 Subsystem Vendor ID: 0000 00:23:43.019 Serial Number: .................... 00:23:43.019 Model Number: ........................................ 
00:23:43.019 Firmware Version: 24.09 00:23:43.019 Recommended Arb Burst: 0 00:23:43.019 IEEE OUI Identifier: 00 00 00 00:23:43.019 Multi-path I/O 00:23:43.019 May have multiple subsystem ports: No 00:23:43.019 May have multiple controllers: No 00:23:43.019 Associated with SR-IOV VF: No 00:23:43.019 Max Data Transfer Size: 131072 00:23:43.019 Max Number of Namespaces: 0 00:23:43.019 Max Number of I/O Queues: 1024 00:23:43.019 NVMe Specification Version (VS): 1.3 00:23:43.019 NVMe Specification Version (Identify): 1.3 00:23:43.019 Maximum Queue Entries: 128 00:23:43.019 Contiguous Queues Required: Yes 00:23:43.019 Arbitration Mechanisms Supported 00:23:43.019 Weighted Round Robin: Not Supported 00:23:43.019 Vendor Specific: Not Supported 00:23:43.019 Reset Timeout: 15000 ms 00:23:43.019 Doorbell Stride: 4 bytes 00:23:43.019 NVM Subsystem Reset: Not Supported 00:23:43.019 Command Sets Supported 00:23:43.019 NVM Command Set: Supported 00:23:43.019 Boot Partition: Not Supported 00:23:43.019 Memory Page Size Minimum: 4096 bytes 00:23:43.019 Memory Page Size Maximum: 4096 bytes 00:23:43.019 Persistent Memory Region: Not Supported 00:23:43.019 Optional Asynchronous Events Supported 00:23:43.019 Namespace Attribute Notices: Not Supported 00:23:43.019 Firmware Activation Notices: Not Supported 00:23:43.019 ANA Change Notices: Not Supported 00:23:43.019 PLE Aggregate Log Change Notices: Not Supported 00:23:43.019 LBA Status Info Alert Notices: Not Supported 00:23:43.019 EGE Aggregate Log Change Notices: Not Supported 00:23:43.019 Normal NVM Subsystem Shutdown event: Not Supported 00:23:43.019 Zone Descriptor Change Notices: Not Supported 00:23:43.019 Discovery Log Change Notices: Supported 00:23:43.019 Controller Attributes 00:23:43.019 128-bit Host Identifier: Not Supported 00:23:43.019 Non-Operational Permissive Mode: Not Supported 00:23:43.019 NVM Sets: Not Supported 00:23:43.019 Read Recovery Levels: Not Supported 00:23:43.019 Endurance Groups: Not Supported 00:23:43.019 
Predictable Latency Mode: Not Supported 00:23:43.019 Traffic Based Keep ALive: Not Supported 00:23:43.019 Namespace Granularity: Not Supported 00:23:43.019 SQ Associations: Not Supported 00:23:43.019 UUID List: Not Supported 00:23:43.019 Multi-Domain Subsystem: Not Supported 00:23:43.019 Fixed Capacity Management: Not Supported 00:23:43.019 Variable Capacity Management: Not Supported 00:23:43.019 Delete Endurance Group: Not Supported 00:23:43.019 Delete NVM Set: Not Supported 00:23:43.019 Extended LBA Formats Supported: Not Supported 00:23:43.019 Flexible Data Placement Supported: Not Supported 00:23:43.019 00:23:43.019 Controller Memory Buffer Support 00:23:43.019 ================================ 00:23:43.019 Supported: No 00:23:43.019 00:23:43.019 Persistent Memory Region Support 00:23:43.019 ================================ 00:23:43.019 Supported: No 00:23:43.019 00:23:43.019 Admin Command Set Attributes 00:23:43.019 ============================ 00:23:43.019 Security Send/Receive: Not Supported 00:23:43.019 Format NVM: Not Supported 00:23:43.019 Firmware Activate/Download: Not Supported 00:23:43.019 Namespace Management: Not Supported 00:23:43.019 Device Self-Test: Not Supported 00:23:43.019 Directives: Not Supported 00:23:43.019 NVMe-MI: Not Supported 00:23:43.019 Virtualization Management: Not Supported 00:23:43.019 Doorbell Buffer Config: Not Supported 00:23:43.019 Get LBA Status Capability: Not Supported 00:23:43.019 Command & Feature Lockdown Capability: Not Supported 00:23:43.019 Abort Command Limit: 1 00:23:43.019 Async Event Request Limit: 4 00:23:43.019 Number of Firmware Slots: N/A 00:23:43.019 Firmware Slot 1 Read-Only: N/A 00:23:43.019 Firmware Activation Without Reset: N/A 00:23:43.019 Multiple Update Detection Support: N/A 00:23:43.019 Firmware Update Granularity: No Information Provided 00:23:43.019 Per-Namespace SMART Log: No 00:23:43.019 Asymmetric Namespace Access Log Page: Not Supported 00:23:43.019 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:43.019 Command Effects Log Page: Not Supported 00:23:43.019 Get Log Page Extended Data: Supported 00:23:43.019 Telemetry Log Pages: Not Supported 00:23:43.019 Persistent Event Log Pages: Not Supported 00:23:43.019 Supported Log Pages Log Page: May Support 00:23:43.019 Commands Supported & Effects Log Page: Not Supported 00:23:43.019 Feature Identifiers & Effects Log Page:May Support 00:23:43.019 NVMe-MI Commands & Effects Log Page: May Support 00:23:43.019 Data Area 4 for Telemetry Log: Not Supported 00:23:43.019 Error Log Page Entries Supported: 128 00:23:43.019 Keep Alive: Not Supported 00:23:43.019 00:23:43.019 NVM Command Set Attributes 00:23:43.019 ========================== 00:23:43.019 Submission Queue Entry Size 00:23:43.019 Max: 1 00:23:43.019 Min: 1 00:23:43.019 Completion Queue Entry Size 00:23:43.019 Max: 1 00:23:43.019 Min: 1 00:23:43.019 Number of Namespaces: 0 00:23:43.019 Compare Command: Not Supported 00:23:43.019 Write Uncorrectable Command: Not Supported 00:23:43.019 Dataset Management Command: Not Supported 00:23:43.019 Write Zeroes Command: Not Supported 00:23:43.019 Set Features Save Field: Not Supported 00:23:43.019 Reservations: Not Supported 00:23:43.019 Timestamp: Not Supported 00:23:43.019 Copy: Not Supported 00:23:43.019 Volatile Write Cache: Not Present 00:23:43.019 Atomic Write Unit (Normal): 1 00:23:43.019 Atomic Write Unit (PFail): 1 00:23:43.019 Atomic Compare & Write Unit: 1 00:23:43.019 Fused Compare & Write: Supported 00:23:43.019 Scatter-Gather List 00:23:43.019 SGL Command Set: Supported 00:23:43.019 SGL Keyed: Supported 00:23:43.019 SGL Bit Bucket Descriptor: Not Supported 00:23:43.019 SGL Metadata Pointer: Not Supported 00:23:43.019 Oversized SGL: Not Supported 00:23:43.019 SGL Metadata Address: Not Supported 00:23:43.019 SGL Offset: Supported 00:23:43.019 Transport SGL Data Block: Not Supported 00:23:43.019 Replay Protected Memory Block: Not Supported 00:23:43.019 00:23:43.019 
Firmware Slot Information 00:23:43.019 ========================= 00:23:43.019 Active slot: 0 00:23:43.019 00:23:43.019 00:23:43.019 Error Log 00:23:43.019 ========= 00:23:43.019 00:23:43.019 Active Namespaces 00:23:43.019 ================= 00:23:43.019 Discovery Log Page 00:23:43.019 ================== 00:23:43.019 Generation Counter: 2 00:23:43.019 Number of Records: 2 00:23:43.019 Record Format: 0 00:23:43.019 00:23:43.019 Discovery Log Entry 0 00:23:43.019 ---------------------- 00:23:43.019 Transport Type: 3 (TCP) 00:23:43.019 Address Family: 1 (IPv4) 00:23:43.019 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:43.019 Entry Flags: 00:23:43.019 Duplicate Returned Information: 1 00:23:43.019 Explicit Persistent Connection Support for Discovery: 1 00:23:43.019 Transport Requirements: 00:23:43.019 Secure Channel: Not Required 00:23:43.019 Port ID: 0 (0x0000) 00:23:43.019 Controller ID: 65535 (0xffff) 00:23:43.019 Admin Max SQ Size: 128 00:23:43.019 Transport Service Identifier: 4420 00:23:43.019 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:43.019 Transport Address: 10.0.0.2 00:23:43.019 Discovery Log Entry 1 00:23:43.019 ---------------------- 00:23:43.019 Transport Type: 3 (TCP) 00:23:43.019 Address Family: 1 (IPv4) 00:23:43.019 Subsystem Type: 2 (NVM Subsystem) 00:23:43.019 Entry Flags: 00:23:43.019 Duplicate Returned Information: 0 00:23:43.019 Explicit Persistent Connection Support for Discovery: 0 00:23:43.019 Transport Requirements: 00:23:43.019 Secure Channel: Not Required 00:23:43.019 Port ID: 0 (0x0000) 00:23:43.019 Controller ID: 65535 (0xffff) 00:23:43.019 Admin Max SQ Size: 128 00:23:43.019 Transport Service Identifier: 4420 00:23:43.020 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:43.020 Transport Address: 10.0.0.2 [2024-07-24 20:03:30.869500] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:43.020 [2024-07-24 20:03:30.869510] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77e40) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.869516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.020 [2024-07-24 20:03:30.869522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc77fc0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.869526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.020 [2024-07-24 20:03:30.869531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc78140) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.869536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.020 [2024-07-24 20:03:30.869541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc782c0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.869545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.020 [2024-07-24 20:03:30.869556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.869560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.869564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf4ec0) 00:23:43.020 [2024-07-24 20:03:30.869571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.020 [2024-07-24 20:03:30.869585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc782c0, cid 3, qid 0 00:23:43.020 [2024-07-24 20:03:30.869895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.020 [2024-07-24 20:03:30.869902] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.020 [2024-07-24 20:03:30.869905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.869909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc782c0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.869919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.869923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.869926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf4ec0) 00:23:43.020 [2024-07-24 20:03:30.869933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.020 [2024-07-24 20:03:30.869946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc782c0, cid 3, qid 0 00:23:43.020 [2024-07-24 20:03:30.870196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.020 [2024-07-24 20:03:30.870210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.020 [2024-07-24 20:03:30.870215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc782c0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.870224] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:43.020 [2024-07-24 20:03:30.870229] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:43.020 [2024-07-24 20:03:30.870238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870245] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf4ec0) 00:23:43.020 [2024-07-24 20:03:30.870252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.020 [2024-07-24 20:03:30.870263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc782c0, cid 3, qid 0 00:23:43.020 [2024-07-24 20:03:30.870496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.020 [2024-07-24 20:03:30.870503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.020 [2024-07-24 20:03:30.870506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc782c0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.870520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf4ec0) 00:23:43.020 [2024-07-24 20:03:30.870534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.020 [2024-07-24 20:03:30.870543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc782c0, cid 3, qid 0 00:23:43.020 [2024-07-24 20:03:30.870786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.020 [2024-07-24 20:03:30.870792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.020 [2024-07-24 20:03:30.870796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc782c0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 
20:03:30.870809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.870816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf4ec0) 00:23:43.020 [2024-07-24 20:03:30.870822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.020 [2024-07-24 20:03:30.870832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc782c0, cid 3, qid 0 00:23:43.020 [2024-07-24 20:03:30.871050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.020 [2024-07-24 20:03:30.871056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.020 [2024-07-24 20:03:30.871062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.871066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc782c0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.871075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.871079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.871082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbf4ec0) 00:23:43.020 [2024-07-24 20:03:30.871089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.020 [2024-07-24 20:03:30.871099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc782c0, cid 3, qid 0 00:23:43.020 [2024-07-24 20:03:30.875210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.020 [2024-07-24 20:03:30.875218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.020 [2024-07-24 20:03:30.875222] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.875226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc782c0) on tqpair=0xbf4ec0 00:23:43.020 [2024-07-24 20:03:30.875233] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:23:43.020 00:23:43.020 20:03:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:43.020 [2024-07-24 20:03:30.914301] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:23:43.020 [2024-07-24 20:03:30.914349] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768589 ] 00:23:43.020 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.020 [2024-07-24 20:03:30.945774] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:43.020 [2024-07-24 20:03:30.945819] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:43.020 [2024-07-24 20:03:30.945824] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:43.020 [2024-07-24 20:03:30.945834] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:43.020 [2024-07-24 20:03:30.945842] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:43.020 [2024-07-24 20:03:30.949233] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:43.020 [2024-07-24 
20:03:30.949257] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1258ec0 0 00:23:43.020 [2024-07-24 20:03:30.956208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:43.020 [2024-07-24 20:03:30.956227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:43.020 [2024-07-24 20:03:30.956232] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:43.020 [2024-07-24 20:03:30.956235] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:43.020 [2024-07-24 20:03:30.956269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.956275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.956279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.020 [2024-07-24 20:03:30.956292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:43.020 [2024-07-24 20:03:30.956312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.020 [2024-07-24 20:03:30.963214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.020 [2024-07-24 20:03:30.963224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.020 [2024-07-24 20:03:30.963228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.963233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.020 [2024-07-24 20:03:30.963241] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:43.020 [2024-07-24 20:03:30.963247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:43.020 [2024-07-24 20:03:30.963252] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:43.020 [2024-07-24 20:03:30.963265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.963269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.020 [2024-07-24 20:03:30.963273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.020 [2024-07-24 20:03:30.963280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.021 [2024-07-24 20:03:30.963293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.021 [2024-07-24 20:03:30.963530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.021 [2024-07-24 20:03:30.963538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.021 [2024-07-24 20:03:30.963541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.963545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.021 [2024-07-24 20:03:30.963553] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:43.021 [2024-07-24 20:03:30.963560] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:43.021 [2024-07-24 20:03:30.963567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.963571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.963574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.021 [2024-07-24 20:03:30.963582] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.021 [2024-07-24 20:03:30.963594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.021 [2024-07-24 20:03:30.963804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.021 [2024-07-24 20:03:30.963810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.021 [2024-07-24 20:03:30.963813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.963817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.021 [2024-07-24 20:03:30.963822] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:43.021 [2024-07-24 20:03:30.963830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:43.021 [2024-07-24 20:03:30.963836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.963840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.963843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.021 [2024-07-24 20:03:30.963850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.021 [2024-07-24 20:03:30.963860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.021 [2024-07-24 20:03:30.964086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.021 [2024-07-24 20:03:30.964092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.021 [2024-07-24 20:03:30.964095] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.021 [2024-07-24 20:03:30.964104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:43.021 [2024-07-24 20:03:30.964113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.021 [2024-07-24 20:03:30.964127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.021 [2024-07-24 20:03:30.964137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.021 [2024-07-24 20:03:30.964356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.021 [2024-07-24 20:03:30.964363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.021 [2024-07-24 20:03:30.964366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.021 [2024-07-24 20:03:30.964374] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:43.021 [2024-07-24 20:03:30.964379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:43.021 [2024-07-24 20:03:30.964386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:43.021 [2024-07-24 20:03:30.964492] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:43.021 [2024-07-24 20:03:30.964496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:43.021 [2024-07-24 20:03:30.964503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.021 [2024-07-24 20:03:30.964518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.021 [2024-07-24 20:03:30.964528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.021 [2024-07-24 20:03:30.964745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.021 [2024-07-24 20:03:30.964751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.021 [2024-07-24 20:03:30.964755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.021 [2024-07-24 20:03:30.964763] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:43.021 [2024-07-24 20:03:30.964772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.964780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.021 [2024-07-24 20:03:30.964786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.021 [2024-07-24 20:03:30.964799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.021 [2024-07-24 20:03:30.965028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.021 [2024-07-24 20:03:30.965034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.021 [2024-07-24 20:03:30.965038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.965041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.021 [2024-07-24 20:03:30.965046] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:43.021 [2024-07-24 20:03:30.965050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:43.021 [2024-07-24 20:03:30.965058] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:43.021 [2024-07-24 20:03:30.965065] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:43.021 [2024-07-24 20:03:30.965074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.965078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.021 [2024-07-24 20:03:30.965085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.021 
[2024-07-24 20:03:30.965095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.021 [2024-07-24 20:03:30.965331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.021 [2024-07-24 20:03:30.965337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.021 [2024-07-24 20:03:30.965341] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.965345] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=4096, cccid=0 00:23:43.021 [2024-07-24 20:03:30.965349] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dbe40) on tqpair(0x1258ec0): expected_datao=0, payload_size=4096 00:23:43.021 [2024-07-24 20:03:30.965354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.965372] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.021 [2024-07-24 20:03:30.965376] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.286 [2024-07-24 20:03:31.007445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.286 [2024-07-24 20:03:31.007449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.286 [2024-07-24 20:03:31.007462] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:43.286 [2024-07-24 20:03:31.007467] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:43.286 [2024-07-24 20:03:31.007471] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 
00:23:43.286 [2024-07-24 20:03:31.007475] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:43.286 [2024-07-24 20:03:31.007480] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:43.286 [2024-07-24 20:03:31.007484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:43.286 [2024-07-24 20:03:31.007493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:43.286 [2024-07-24 20:03:31.007504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.286 [2024-07-24 20:03:31.007522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:43.286 [2024-07-24 20:03:31.007536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.286 [2024-07-24 20:03:31.007703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.286 [2024-07-24 20:03:31.007709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.286 [2024-07-24 20:03:31.007713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.286 [2024-07-24 20:03:31.007723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:43.286 [2024-07-24 20:03:31.007730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1258ec0) 00:23:43.286 [2024-07-24 20:03:31.007737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.286 [2024-07-24 20:03:31.007743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1258ec0) 00:23:43.286 [2024-07-24 20:03:31.007756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.286 [2024-07-24 20:03:31.007762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1258ec0) 00:23:43.286 [2024-07-24 20:03:31.007775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.286 [2024-07-24 20:03:31.007781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.286 [2024-07-24 20:03:31.007794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.286 [2024-07-24 20:03:31.007798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:43.286 [2024-07-24 20:03:31.007808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:43.286 [2024-07-24 20:03:31.007814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.007818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1258ec0) 00:23:43.286 [2024-07-24 20:03:31.007825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.286 [2024-07-24 20:03:31.007837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbe40, cid 0, qid 0 00:23:43.286 [2024-07-24 20:03:31.007842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dbfc0, cid 1, qid 0 00:23:43.286 [2024-07-24 20:03:31.007847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc140, cid 2, qid 0 00:23:43.286 [2024-07-24 20:03:31.007852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.286 [2024-07-24 20:03:31.007856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc440, cid 4, qid 0 00:23:43.286 [2024-07-24 20:03:31.008096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.286 [2024-07-24 20:03:31.008102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.286 [2024-07-24 20:03:31.008105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.008109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc440) on tqpair=0x1258ec0 00:23:43.286 [2024-07-24 20:03:31.008114] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive 
every 5000000 us 00:23:43.286 [2024-07-24 20:03:31.008119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:43.286 [2024-07-24 20:03:31.008128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:43.286 [2024-07-24 20:03:31.008134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:43.286 [2024-07-24 20:03:31.008140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.008144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.286 [2024-07-24 20:03:31.008147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1258ec0) 00:23:43.286 [2024-07-24 20:03:31.008154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:43.286 [2024-07-24 20:03:31.008164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc440, cid 4, qid 0 00:23:43.287 [2024-07-24 20:03:31.012208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.287 [2024-07-24 20:03:31.012216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.287 [2024-07-24 20:03:31.012220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc440) on tqpair=0x1258ec0 00:23:43.287 [2024-07-24 20:03:31.012288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.012298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify active ns (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.012305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1258ec0) 00:23:43.287 [2024-07-24 20:03:31.012315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.287 [2024-07-24 20:03:31.012326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc440, cid 4, qid 0 00:23:43.287 [2024-07-24 20:03:31.012562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.287 [2024-07-24 20:03:31.012569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.287 [2024-07-24 20:03:31.012573] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012576] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=4096, cccid=4 00:23:43.287 [2024-07-24 20:03:31.012581] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dc440) on tqpair(0x1258ec0): expected_datao=0, payload_size=4096 00:23:43.287 [2024-07-24 20:03:31.012585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012592] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012596] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.287 [2024-07-24 20:03:31.012775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.287 [2024-07-24 20:03:31.012779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012785] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc440) on tqpair=0x1258ec0 00:23:43.287 [2024-07-24 20:03:31.012795] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:43.287 [2024-07-24 20:03:31.012810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.012820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.012827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.012830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1258ec0) 00:23:43.287 [2024-07-24 20:03:31.012837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.287 [2024-07-24 20:03:31.012848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc440, cid 4, qid 0 00:23:43.287 [2024-07-24 20:03:31.013074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.287 [2024-07-24 20:03:31.013080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.287 [2024-07-24 20:03:31.013084] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013087] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=4096, cccid=4 00:23:43.287 [2024-07-24 20:03:31.013091] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dc440) on tqpair(0x1258ec0): expected_datao=0, payload_size=4096 00:23:43.287 [2024-07-24 20:03:31.013096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013102] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013106] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.287 [2024-07-24 20:03:31.013278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.287 [2024-07-24 20:03:31.013282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc440) on tqpair=0x1258ec0 00:23:43.287 [2024-07-24 20:03:31.013298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.013307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.013314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1258ec0) 00:23:43.287 [2024-07-24 20:03:31.013324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.287 [2024-07-24 20:03:31.013336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc440, cid 4, qid 0 00:23:43.287 [2024-07-24 20:03:31.013569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.287 [2024-07-24 20:03:31.013575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.287 [2024-07-24 20:03:31.013578] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013582] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=4096, cccid=4 00:23:43.287 [2024-07-24 20:03:31.013586] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dc440) on tqpair(0x1258ec0): expected_datao=0, payload_size=4096 00:23:43.287 [2024-07-24 20:03:31.013590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013597] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013603] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.287 [2024-07-24 20:03:31.013785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.287 [2024-07-24 20:03:31.013789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.287 [2024-07-24 20:03:31.013792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc440) on tqpair=0x1258ec0 00:23:43.287 [2024-07-24 20:03:31.013799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.013807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.013815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.013823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:43.287 [2024-07-24 20:03:31.013828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:43.287 
[2024-07-24 20:03:31.013833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:43.288 [2024-07-24 20:03:31.013838] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:43.288 [2024-07-24 20:03:31.013842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:43.288 [2024-07-24 20:03:31.013847] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:43.288 [2024-07-24 20:03:31.013861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.013864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.013871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.013878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.013882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.013885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.013891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.288 [2024-07-24 20:03:31.013905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc440, cid 4, qid 0 00:23:43.288 [2024-07-24 20:03:31.013910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc5c0, cid 5, qid 0 00:23:43.288 [2024-07-24 20:03:31.014106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.288 
[2024-07-24 20:03:31.014112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.288 [2024-07-24 20:03:31.014116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc440) on tqpair=0x1258ec0 00:23:43.288 [2024-07-24 20:03:31.014126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.288 [2024-07-24 20:03:31.014132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.288 [2024-07-24 20:03:31.014135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc5c0) on tqpair=0x1258ec0 00:23:43.288 [2024-07-24 20:03:31.014148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.014160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.014171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc5c0, cid 5, qid 0 00:23:43.288 [2024-07-24 20:03:31.014402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.288 [2024-07-24 20:03:31.014409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.288 [2024-07-24 20:03:31.014412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc5c0) on tqpair=0x1258ec0 00:23:43.288 [2024-07-24 20:03:31.014425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014428] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.014435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.014445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc5c0, cid 5, qid 0 00:23:43.288 [2024-07-24 20:03:31.014682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.288 [2024-07-24 20:03:31.014689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.288 [2024-07-24 20:03:31.014692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc5c0) on tqpair=0x1258ec0 00:23:43.288 [2024-07-24 20:03:31.014705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.014715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.014725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc5c0, cid 5, qid 0 00:23:43.288 [2024-07-24 20:03:31.014955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.288 [2024-07-24 20:03:31.014961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.288 [2024-07-24 20:03:31.014964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc5c0) on tqpair=0x1258ec0 00:23:43.288 [2024-07-24 20:03:31.014982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.014986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.014992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.015000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.015009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.015017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.015026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.015033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1258ec0) 00:23:43.288 [2024-07-24 20:03:31.015045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.288 [2024-07-24 20:03:31.015057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc5c0, cid 5, qid 0 00:23:43.288 [2024-07-24 20:03:31.015062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x12dc440, cid 4, qid 0 00:23:43.288 [2024-07-24 20:03:31.015067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc740, cid 6, qid 0 00:23:43.288 [2024-07-24 20:03:31.015071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc8c0, cid 7, qid 0 00:23:43.288 [2024-07-24 20:03:31.015335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.288 [2024-07-24 20:03:31.015342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.288 [2024-07-24 20:03:31.015345] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015349] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=8192, cccid=5 00:23:43.288 [2024-07-24 20:03:31.015353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dc5c0) on tqpair(0x1258ec0): expected_datao=0, payload_size=8192 00:23:43.288 [2024-07-24 20:03:31.015358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015457] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015461] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.288 [2024-07-24 20:03:31.015473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.288 [2024-07-24 20:03:31.015476] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015480] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=512, cccid=4 00:23:43.288 [2024-07-24 20:03:31.015484] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dc440) on tqpair(0x1258ec0): expected_datao=0, payload_size=512 00:23:43.288 [2024-07-24 20:03:31.015488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015495] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015498] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.288 [2024-07-24 20:03:31.015509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.288 [2024-07-24 20:03:31.015513] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=512, cccid=6 00:23:43.288 [2024-07-24 20:03:31.015520] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dc740) on tqpair(0x1258ec0): expected_datao=0, payload_size=512 00:23:43.288 [2024-07-24 20:03:31.015524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015531] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.288 [2024-07-24 20:03:31.015534] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:43.289 [2024-07-24 20:03:31.015545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:43.289 [2024-07-24 20:03:31.015549] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015552] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1258ec0): datao=0, datal=4096, cccid=7 00:23:43.289 [2024-07-24 20:03:31.015556] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dc8c0) on tqpair(0x1258ec0): expected_datao=0, payload_size=4096 00:23:43.289 [2024-07-24 20:03:31.015561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015572] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015575] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.289 [2024-07-24 20:03:31.015763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.289 [2024-07-24 20:03:31.015767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc5c0) on tqpair=0x1258ec0 00:23:43.289 [2024-07-24 20:03:31.015784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.289 [2024-07-24 20:03:31.015790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.289 [2024-07-24 20:03:31.015793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc440) on tqpair=0x1258ec0 00:23:43.289 [2024-07-24 20:03:31.015806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.289 [2024-07-24 20:03:31.015812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.289 [2024-07-24 20:03:31.015816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc740) on tqpair=0x1258ec0 00:23:43.289 [2024-07-24 20:03:31.015826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.289 [2024-07-24 20:03:31.015832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.289 [2024-07-24 20:03:31.015835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.289 [2024-07-24 20:03:31.015839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc8c0) on 
tqpair=0x1258ec0 00:23:43.289 ===================================================== 00:23:43.289 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.289 ===================================================== 00:23:43.289 Controller Capabilities/Features 00:23:43.289 ================================ 00:23:43.289 Vendor ID: 8086 00:23:43.289 Subsystem Vendor ID: 8086 00:23:43.289 Serial Number: SPDK00000000000001 00:23:43.289 Model Number: SPDK bdev Controller 00:23:43.289 Firmware Version: 24.09 00:23:43.289 Recommended Arb Burst: 6 00:23:43.289 IEEE OUI Identifier: e4 d2 5c 00:23:43.289 Multi-path I/O 00:23:43.289 May have multiple subsystem ports: Yes 00:23:43.289 May have multiple controllers: Yes 00:23:43.289 Associated with SR-IOV VF: No 00:23:43.289 Max Data Transfer Size: 131072 00:23:43.289 Max Number of Namespaces: 32 00:23:43.289 Max Number of I/O Queues: 127 00:23:43.289 NVMe Specification Version (VS): 1.3 00:23:43.289 NVMe Specification Version (Identify): 1.3 00:23:43.289 Maximum Queue Entries: 128 00:23:43.289 Contiguous Queues Required: Yes 00:23:43.289 Arbitration Mechanisms Supported 00:23:43.289 Weighted Round Robin: Not Supported 00:23:43.289 Vendor Specific: Not Supported 00:23:43.289 Reset Timeout: 15000 ms 00:23:43.289 Doorbell Stride: 4 bytes 00:23:43.289 NVM Subsystem Reset: Not Supported 00:23:43.289 Command Sets Supported 00:23:43.289 NVM Command Set: Supported 00:23:43.289 Boot Partition: Not Supported 00:23:43.289 Memory Page Size Minimum: 4096 bytes 00:23:43.289 Memory Page Size Maximum: 4096 bytes 00:23:43.289 Persistent Memory Region: Not Supported 00:23:43.289 Optional Asynchronous Events Supported 00:23:43.289 Namespace Attribute Notices: Supported 00:23:43.289 Firmware Activation Notices: Not Supported 00:23:43.289 ANA Change Notices: Not Supported 00:23:43.289 PLE Aggregate Log Change Notices: Not Supported 00:23:43.289 LBA Status Info Alert Notices: Not Supported 00:23:43.289 EGE Aggregate Log Change 
Notices: Not Supported 00:23:43.289 Normal NVM Subsystem Shutdown event: Not Supported 00:23:43.289 Zone Descriptor Change Notices: Not Supported 00:23:43.289 Discovery Log Change Notices: Not Supported 00:23:43.289 Controller Attributes 00:23:43.289 128-bit Host Identifier: Supported 00:23:43.289 Non-Operational Permissive Mode: Not Supported 00:23:43.289 NVM Sets: Not Supported 00:23:43.289 Read Recovery Levels: Not Supported 00:23:43.289 Endurance Groups: Not Supported 00:23:43.289 Predictable Latency Mode: Not Supported 00:23:43.289 Traffic Based Keep ALive: Not Supported 00:23:43.289 Namespace Granularity: Not Supported 00:23:43.289 SQ Associations: Not Supported 00:23:43.289 UUID List: Not Supported 00:23:43.289 Multi-Domain Subsystem: Not Supported 00:23:43.289 Fixed Capacity Management: Not Supported 00:23:43.289 Variable Capacity Management: Not Supported 00:23:43.289 Delete Endurance Group: Not Supported 00:23:43.289 Delete NVM Set: Not Supported 00:23:43.289 Extended LBA Formats Supported: Not Supported 00:23:43.289 Flexible Data Placement Supported: Not Supported 00:23:43.289 00:23:43.289 Controller Memory Buffer Support 00:23:43.289 ================================ 00:23:43.289 Supported: No 00:23:43.289 00:23:43.289 Persistent Memory Region Support 00:23:43.289 ================================ 00:23:43.289 Supported: No 00:23:43.289 00:23:43.289 Admin Command Set Attributes 00:23:43.289 ============================ 00:23:43.289 Security Send/Receive: Not Supported 00:23:43.289 Format NVM: Not Supported 00:23:43.289 Firmware Activate/Download: Not Supported 00:23:43.289 Namespace Management: Not Supported 00:23:43.289 Device Self-Test: Not Supported 00:23:43.289 Directives: Not Supported 00:23:43.289 NVMe-MI: Not Supported 00:23:43.289 Virtualization Management: Not Supported 00:23:43.289 Doorbell Buffer Config: Not Supported 00:23:43.289 Get LBA Status Capability: Not Supported 00:23:43.289 Command & Feature Lockdown Capability: Not Supported 
00:23:43.289 Abort Command Limit: 4 00:23:43.289 Async Event Request Limit: 4 00:23:43.289 Number of Firmware Slots: N/A 00:23:43.289 Firmware Slot 1 Read-Only: N/A 00:23:43.289 Firmware Activation Without Reset: N/A 00:23:43.289 Multiple Update Detection Support: N/A 00:23:43.289 Firmware Update Granularity: No Information Provided 00:23:43.289 Per-Namespace SMART Log: No 00:23:43.289 Asymmetric Namespace Access Log Page: Not Supported 00:23:43.289 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:43.289 Command Effects Log Page: Supported 00:23:43.289 Get Log Page Extended Data: Supported 00:23:43.289 Telemetry Log Pages: Not Supported 00:23:43.289 Persistent Event Log Pages: Not Supported 00:23:43.289 Supported Log Pages Log Page: May Support 00:23:43.289 Commands Supported & Effects Log Page: Not Supported 00:23:43.289 Feature Identifiers & Effects Log Page:May Support 00:23:43.289 NVMe-MI Commands & Effects Log Page: May Support 00:23:43.289 Data Area 4 for Telemetry Log: Not Supported 00:23:43.289 Error Log Page Entries Supported: 128 00:23:43.289 Keep Alive: Supported 00:23:43.289 Keep Alive Granularity: 10000 ms 00:23:43.289 00:23:43.289 NVM Command Set Attributes 00:23:43.289 ========================== 00:23:43.289 Submission Queue Entry Size 00:23:43.289 Max: 64 00:23:43.289 Min: 64 00:23:43.289 Completion Queue Entry Size 00:23:43.290 Max: 16 00:23:43.290 Min: 16 00:23:43.290 Number of Namespaces: 32 00:23:43.290 Compare Command: Supported 00:23:43.290 Write Uncorrectable Command: Not Supported 00:23:43.290 Dataset Management Command: Supported 00:23:43.290 Write Zeroes Command: Supported 00:23:43.290 Set Features Save Field: Not Supported 00:23:43.290 Reservations: Supported 00:23:43.290 Timestamp: Not Supported 00:23:43.290 Copy: Supported 00:23:43.290 Volatile Write Cache: Present 00:23:43.290 Atomic Write Unit (Normal): 1 00:23:43.290 Atomic Write Unit (PFail): 1 00:23:43.290 Atomic Compare & Write Unit: 1 00:23:43.290 Fused Compare & Write: Supported 
00:23:43.290 Scatter-Gather List 00:23:43.290 SGL Command Set: Supported 00:23:43.290 SGL Keyed: Supported 00:23:43.290 SGL Bit Bucket Descriptor: Not Supported 00:23:43.290 SGL Metadata Pointer: Not Supported 00:23:43.290 Oversized SGL: Not Supported 00:23:43.290 SGL Metadata Address: Not Supported 00:23:43.290 SGL Offset: Supported 00:23:43.290 Transport SGL Data Block: Not Supported 00:23:43.290 Replay Protected Memory Block: Not Supported 00:23:43.290 00:23:43.290 Firmware Slot Information 00:23:43.290 ========================= 00:23:43.290 Active slot: 1 00:23:43.290 Slot 1 Firmware Revision: 24.09 00:23:43.290 00:23:43.290 00:23:43.290 Commands Supported and Effects 00:23:43.290 ============================== 00:23:43.290 Admin Commands 00:23:43.290 -------------- 00:23:43.290 Get Log Page (02h): Supported 00:23:43.290 Identify (06h): Supported 00:23:43.290 Abort (08h): Supported 00:23:43.290 Set Features (09h): Supported 00:23:43.290 Get Features (0Ah): Supported 00:23:43.290 Asynchronous Event Request (0Ch): Supported 00:23:43.290 Keep Alive (18h): Supported 00:23:43.290 I/O Commands 00:23:43.290 ------------ 00:23:43.290 Flush (00h): Supported LBA-Change 00:23:43.290 Write (01h): Supported LBA-Change 00:23:43.290 Read (02h): Supported 00:23:43.290 Compare (05h): Supported 00:23:43.290 Write Zeroes (08h): Supported LBA-Change 00:23:43.290 Dataset Management (09h): Supported LBA-Change 00:23:43.290 Copy (19h): Supported LBA-Change 00:23:43.290 00:23:43.290 Error Log 00:23:43.290 ========= 00:23:43.290 00:23:43.290 Arbitration 00:23:43.290 =========== 00:23:43.290 Arbitration Burst: 1 00:23:43.290 00:23:43.290 Power Management 00:23:43.290 ================ 00:23:43.290 Number of Power States: 1 00:23:43.290 Current Power State: Power State #0 00:23:43.290 Power State #0: 00:23:43.290 Max Power: 0.00 W 00:23:43.290 Non-Operational State: Operational 00:23:43.290 Entry Latency: Not Reported 00:23:43.290 Exit Latency: Not Reported 00:23:43.290 Relative Read 
Throughput: 0 00:23:43.290 Relative Read Latency: 0 00:23:43.290 Relative Write Throughput: 0 00:23:43.290 Relative Write Latency: 0 00:23:43.290 Idle Power: Not Reported 00:23:43.290 Active Power: Not Reported 00:23:43.290 Non-Operational Permissive Mode: Not Supported 00:23:43.290 00:23:43.290 Health Information 00:23:43.290 ================== 00:23:43.290 Critical Warnings: 00:23:43.290 Available Spare Space: OK 00:23:43.290 Temperature: OK 00:23:43.290 Device Reliability: OK 00:23:43.290 Read Only: No 00:23:43.290 Volatile Memory Backup: OK 00:23:43.290 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:43.290 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:43.290 Available Spare: 0% 00:23:43.290 Available Spare Threshold: 0% 00:23:43.290 Life Percentage Used:[2024-07-24 20:03:31.015937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.015942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1258ec0) 00:23:43.290 [2024-07-24 20:03:31.015949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.290 [2024-07-24 20:03:31.015961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc8c0, cid 7, qid 0 00:23:43.290 [2024-07-24 20:03:31.020208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.290 [2024-07-24 20:03:31.020216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.290 [2024-07-24 20:03:31.020220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc8c0) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.020253] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:43.290 [2024-07-24 20:03:31.020262] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbe40) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.020268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.290 [2024-07-24 20:03:31.020274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dbfc0) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.020278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.290 [2024-07-24 20:03:31.020283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc140) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.020288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.290 [2024-07-24 20:03:31.020293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.020297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.290 [2024-07-24 20:03:31.020305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.290 [2024-07-24 20:03:31.020321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.290 [2024-07-24 20:03:31.020334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.290 [2024-07-24 20:03:31.020594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.290 [2024-07-24 20:03:31.020601] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.290 [2024-07-24 20:03:31.020605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.020616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.290 [2024-07-24 20:03:31.020630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.290 [2024-07-24 20:03:31.020643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.290 [2024-07-24 20:03:31.020891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.290 [2024-07-24 20:03:31.020897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.290 [2024-07-24 20:03:31.020901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.020909] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:43.290 [2024-07-24 20:03:31.020914] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:43.290 [2024-07-24 20:03:31.020923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.020930] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.290 [2024-07-24 20:03:31.020937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.290 [2024-07-24 20:03:31.020946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.290 [2024-07-24 20:03:31.021178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.290 [2024-07-24 20:03:31.021184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.290 [2024-07-24 20:03:31.021187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.021191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.290 [2024-07-24 20:03:31.021216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.021220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.290 [2024-07-24 20:03:31.021224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.290 [2024-07-24 20:03:31.021231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.290 [2024-07-24 20:03:31.021241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.290 [2024-07-24 20:03:31.021495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.290 [2024-07-24 20:03:31.021501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.291 [2024-07-24 20:03:31.021505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.021509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.291 [2024-07-24 
20:03:31.021518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.021525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.021528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.291 [2024-07-24 20:03:31.021535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.291 [2024-07-24 20:03:31.021545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.291 [2024-07-24 20:03:31.021799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.291 [2024-07-24 20:03:31.021805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.291 [2024-07-24 20:03:31.021809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.021812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.291 [2024-07-24 20:03:31.021822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.021826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.021829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.291 [2024-07-24 20:03:31.021836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.291 [2024-07-24 20:03:31.021846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.291 [2024-07-24 20:03:31.022099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.291 [2024-07-24 20:03:31.022105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.291 [2024-07-24 
20:03:31.022109] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.022113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.291 [2024-07-24 20:03:31.022122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.022126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.022130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.291 [2024-07-24 20:03:31.022136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.291 [2024-07-24 20:03:31.022146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.291 [2024-07-24 20:03:31.022351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.291 [2024-07-24 20:03:31.022358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.291 [2024-07-24 20:03:31.022362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.022366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.291 [2024-07-24 20:03:31.022375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.022379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.291 [2024-07-24 20:03:31.022383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.291 [2024-07-24 20:03:31.022389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.291 [2024-07-24 20:03:31.022399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 
00:23:43.292 [2024-07-24 20:03:31.028213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.292 [2024-07-24 20:03:31.028222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.292 [2024-07-24 20:03:31.028225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.292 [2024-07-24 20:03:31.028229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.292 [2024-07-24 20:03:31.028239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:43.292 [2024-07-24 20:03:31.028243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:43.292 [2024-07-24 20:03:31.028247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1258ec0) 00:23:43.292 [2024-07-24 20:03:31.028253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:43.292 [2024-07-24 20:03:31.028265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dc2c0, cid 3, qid 0 00:23:43.292 [2024-07-24 20:03:31.028492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:43.292 [2024-07-24 
20:03:31.028499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:43.292 [2024-07-24 20:03:31.028502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:43.292 [2024-07-24 20:03:31.028506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dc2c0) on tqpair=0x1258ec0 00:23:43.292 [2024-07-24 20:03:31.028514] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:43.292 0% 00:23:43.292 Data Units Read: 0 00:23:43.292 Data Units Written: 0 00:23:43.292 Host Read Commands: 0 00:23:43.292 Host Write Commands: 0 00:23:43.292 Controller Busy Time: 0 minutes 00:23:43.292 Power Cycles: 0 00:23:43.292 Power On Hours: 0 hours 00:23:43.292 Unsafe Shutdowns: 0 00:23:43.292 Unrecoverable Media Errors: 0 00:23:43.292 Lifetime Error Log Entries: 0 00:23:43.292 Warning Temperature Time: 0 minutes 00:23:43.292 Critical Temperature Time: 0 minutes 00:23:43.292 00:23:43.292 Number of Queues 00:23:43.292 ================ 00:23:43.292 Number of I/O Submission Queues: 127 00:23:43.292 Number of I/O Completion Queues: 127 00:23:43.292 00:23:43.292 Active Namespaces 00:23:43.292 ================= 00:23:43.292 Namespace ID:1 00:23:43.292 Error Recovery Timeout: Unlimited 00:23:43.292 Command Set Identifier: NVM (00h) 00:23:43.292 Deallocate: Supported 00:23:43.292 Deallocated/Unwritten Error: Not Supported 00:23:43.292 Deallocated Read Value: Unknown 00:23:43.292 Deallocate in Write Zeroes: Not Supported 00:23:43.292 Deallocated Guard Field: 0xFFFF 00:23:43.292 Flush: Supported 00:23:43.292 Reservation: Supported 00:23:43.292 Namespace Sharing Capabilities: Multiple Controllers 00:23:43.292 Size (in LBAs): 131072 (0GiB) 00:23:43.292 Capacity (in LBAs): 131072 (0GiB) 00:23:43.292 Utilization (in LBAs): 131072 (0GiB) 00:23:43.292 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:43.292 EUI64: ABCDEF0123456789 00:23:43.292 UUID: 
4c2bc66a-be73-489c-a1e1-24b41b24a9d1 00:23:43.292 Thin Provisioning: Not Supported 00:23:43.292 Per-NS Atomic Units: Yes 00:23:43.292 Atomic Boundary Size (Normal): 0 00:23:43.292 Atomic Boundary Size (PFail): 0 00:23:43.292 Atomic Boundary Offset: 0 00:23:43.292 Maximum Single Source Range Length: 65535 00:23:43.292 Maximum Copy Length: 65535 00:23:43.292 Maximum Source Range Count: 1 00:23:43.292 NGUID/EUI64 Never Reused: No 00:23:43.292 Namespace Write Protected: No 00:23:43.292 Number of LBA Formats: 1 00:23:43.292 Current LBA Format: LBA Format #00 00:23:43.292 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:43.292 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.292 rmmod nvme_tcp 00:23:43.292 
rmmod nvme_fabrics 00:23:43.292 rmmod nvme_keyring 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3768427 ']' 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3768427 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3768427 ']' 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3768427 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3768427 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3768427' 00:23:43.292 killing process with pid 3768427 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3768427 00:23:43.292 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3768427 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.553 20:03:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.468 20:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:45.468 00:23:45.468 real 0m11.173s 00:23:45.468 user 0m8.005s 00:23:45.468 sys 0m5.860s 00:23:45.468 20:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:45.469 20:03:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.469 ************************************ 00:23:45.469 END TEST nvmf_identify 00:23:45.469 ************************************ 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.730 ************************************ 00:23:45.730 START TEST nvmf_perf 00:23:45.730 ************************************ 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:45.730 * Looking for test storage... 
00:23:45.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.730 20:03:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.730 20:03:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.730 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:45.731 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:45.731 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:45.731 20:03:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:52.385 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.385 20:03:39 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:52.385 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:23:52.385 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:52.385 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.385 20:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.385 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:23:52.385 00:23:52.385 --- 10.0.0.2 ping statistics --- 00:23:52.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.385 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:23:52.385 00:23:52.385 --- 10.0.0.1 ping statistics --- 00:23:52.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.385 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.385 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3772720 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3772720 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3772720 ']' 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.386 20:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.386 [2024-07-24 20:03:40.328869] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:23:52.386 [2024-07-24 20:03:40.328936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.647 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.647 [2024-07-24 20:03:40.399991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.647 [2024-07-24 20:03:40.478364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:52.647 [2024-07-24 20:03:40.478404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.647 [2024-07-24 20:03:40.478413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.647 [2024-07-24 20:03:40.478419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.647 [2024-07-24 20:03:40.478425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.647 [2024-07-24 20:03:40.478601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.647 [2024-07-24 20:03:40.478720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.647 [2024-07-24 20:03:40.478880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.647 [2024-07-24 20:03:40.478882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:53.220 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:53.793 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:53.793 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:54.054 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:54.054 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:54.054 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:54.054 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:54.054 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:54.054 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:54.054 20:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.315 [2024-07-24 20:03:42.117355] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.315 20:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.576 20:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:54.576 20:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.576 20:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:54.576 20:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:54.837 20:03:42 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.097 [2024-07-24 20:03:42.799822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.097 20:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:55.097 20:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:55.097 20:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:55.097 20:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:55.097 20:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:56.482 Initializing NVMe Controllers 00:23:56.482 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:56.482 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:56.482 Initialization complete. Launching workers. 
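The perf runs in this section all follow the same spdk_nvme_perf invocation pattern, varying only queue depth, I/O size, and the transport ID (local PCIe first, then NVMe/TCP). A hedged sketch of that pattern, with flag meanings as used in the log (PERF_BIN is a placeholder for the binary in your build tree):

```shell
# Invocation pattern shared by the perf runs above. PERF_BIN is a placeholder;
# this log runs it from <spdk>/build/bin/spdk_nvme_perf.
PERF_BIN=${PERF_BIN:-./build/bin/spdk_nvme_perf}
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

# -q  queue depth per namespace
# -o  I/O size in bytes
# -w  workload pattern (randrw = mixed random read/write)
# -M  read percentage for mixed workloads (50 = 50/50 read/write)
# -t  run time in seconds
# -r  transport ID of the controller to attach (PCIe or fabrics)
cmd=("$PERF_BIN" -q 32 -o 4096 -w randrw -M 50 -t 1 -r "$TRID")

echo "${cmd[@]}"      # inspect the assembled command line
# "${cmd[@]}"         # uncomment to run against a live target
```

The later runs in the log add further options (e.g. `-O`, `-P`, `--transport-stat`) on top of this same skeleton.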
00:23:56.482 ======================================================== 00:23:56.482 Latency(us) 00:23:56.482 Device Information : IOPS MiB/s Average min max 00:23:56.482 PCIE (0000:65:00.0) NSID 1 from core 0: 80168.53 313.16 398.61 13.29 5261.48 00:23:56.482 ======================================================== 00:23:56.482 Total : 80168.53 313.16 398.61 13.29 5261.48 00:23:56.482 00:23:56.482 20:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.482 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.869 Initializing NVMe Controllers 00:23:57.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:57.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:57.869 Initialization complete. Launching workers. 
00:23:57.869 ======================================================== 00:23:57.869 Latency(us) 00:23:57.869 Device Information : IOPS MiB/s Average min max 00:23:57.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.63 0.40 9898.72 522.35 45505.99 00:23:57.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.81 0.20 19835.08 7001.77 55869.72 00:23:57.869 ======================================================== 00:23:57.869 Total : 153.44 0.60 13189.33 522.35 55869.72 00:23:57.869 00:23:57.869 20:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:57.869 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.254 Initializing NVMe Controllers 00:23:59.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:59.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:59.254 Initialization complete. Launching workers. 
00:23:59.254 ======================================================== 00:23:59.254 Latency(us) 00:23:59.254 Device Information : IOPS MiB/s Average min max 00:23:59.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11215.37 43.81 2853.09 465.12 6845.74 00:23:59.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3792.79 14.82 8499.31 6696.94 16212.27 00:23:59.254 ======================================================== 00:23:59.254 Total : 15008.16 58.63 4279.97 465.12 16212.27 00:23:59.254 00:23:59.254 20:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:59.254 20:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:59.254 20:03:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:59.254 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.800 Initializing NVMe Controllers 00:24:01.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.800 Controller IO queue size 128, less than required. 00:24:01.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.800 Controller IO queue size 128, less than required. 00:24:01.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:01.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:01.800 Initialization complete. Launching workers. 
00:24:01.800 ======================================================== 00:24:01.800 Latency(us) 00:24:01.800 Device Information : IOPS MiB/s Average min max 00:24:01.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 887.49 221.87 149989.44 100320.22 214553.08 00:24:01.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.99 143.75 228722.35 71467.36 325303.55 00:24:01.800 ======================================================== 00:24:01.800 Total : 1462.48 365.62 180944.26 71467.36 325303.55 00:24:01.800 00:24:01.801 20:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:02.062 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.062 No valid NVMe controllers or AIO or URING devices found 00:24:02.062 Initializing NVMe Controllers 00:24:02.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.062 Controller IO queue size 128, less than required. 00:24:02.062 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.062 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:02.062 Controller IO queue size 128, less than required. 00:24:02.062 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:02.062 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:02.062 WARNING: Some requested NVMe devices were skipped 00:24:02.062 20:03:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:02.062 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.610 Initializing NVMe Controllers 00:24:04.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.610 Controller IO queue size 128, less than required. 00:24:04.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.610 Controller IO queue size 128, less than required. 00:24:04.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.610 Initialization complete. Launching workers. 
00:24:04.610 00:24:04.610 ==================== 00:24:04.610 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:04.610 TCP transport: 00:24:04.610 polls: 43974 00:24:04.610 idle_polls: 14630 00:24:04.610 sock_completions: 29344 00:24:04.610 nvme_completions: 3617 00:24:04.610 submitted_requests: 5440 00:24:04.610 queued_requests: 1 00:24:04.610 00:24:04.610 ==================== 00:24:04.610 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:04.610 TCP transport: 00:24:04.610 polls: 41060 00:24:04.610 idle_polls: 10205 00:24:04.610 sock_completions: 30855 00:24:04.610 nvme_completions: 4011 00:24:04.610 submitted_requests: 5934 00:24:04.610 queued_requests: 1 00:24:04.610 ======================================================== 00:24:04.610 Latency(us) 00:24:04.610 Device Information : IOPS MiB/s Average min max 00:24:04.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 904.00 226.00 146615.61 76478.03 236566.97 00:24:04.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1002.50 250.62 130616.51 60697.38 188452.84 00:24:04.610 ======================================================== 00:24:04.610 Total : 1906.50 476.62 138202.76 60697.38 236566.97 00:24:04.610 00:24:04.610 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:04.610 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:04.871 20:03:52 
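The summary table just above can be cross-checked by hand: summing the IOPS column of the two per-namespace rows reproduces the Total row. With those rows pasted verbatim from the log, a small awk pipeline does the check (904.00 + 1002.50 = 1906.50):

```shell
# Sum the IOPS column (first field after "core 0: ") of the per-namespace
# rows from the final latency table above.
total=$(awk -F': ' '{split($2, f, " "); s += f[1]} END {printf "%.2f", s}' <<'EOF'
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 904.00 226.00 146615.61 76478.03 236566.97
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1002.50 250.62 130616.51 60697.38 188452.84
EOF
)
echo "$total"   # 1906.50, matching the Total row's IOPS
```

The same check holds for the earlier tables (e.g. 11215.37 + 3792.79 = 15008.16 in the q=32 run).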
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:04.871 rmmod nvme_tcp 00:24:04.871 rmmod nvme_fabrics 00:24:04.871 rmmod nvme_keyring 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3772720 ']' 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3772720 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3772720 ']' 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3772720 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3772720 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3772720' 00:24:04.871 killing process with pid 3772720 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 3772720 00:24:04.871 20:03:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3772720 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.416 20:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:09.332 00:24:09.332 real 0m23.376s 00:24:09.332 user 0m58.294s 00:24:09.332 sys 0m7.464s 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:09.332 ************************************ 00:24:09.332 END TEST nvmf_perf 00:24:09.332 ************************************ 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:24:09.332 ************************************ 00:24:09.332 START TEST nvmf_fio_host 00:24:09.332 ************************************ 00:24:09.332 20:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:09.332 * Looking for test storage... 00:24:09.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.332 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.332 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.332 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.332 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.332 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.332 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:09.333 20:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.949 20:04:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:15.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:15.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:15.949 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:15.949 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.949 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.950 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.950 20:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:16.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:24:16.211 00:24:16.211 --- 10.0.0.2 ping statistics --- 00:24:16.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.211 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:24:16.211 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:24:16.472 00:24:16.472 --- 10.0.0.1 ping statistics --- 00:24:16.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.472 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.472 
20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3779633 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3779633 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3779633 ']' 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.472 20:04:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.472 [2024-07-24 20:04:04.269042] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:24:16.472 [2024-07-24 20:04:04.269111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.472 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.472 [2024-07-24 20:04:04.339466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.472 [2024-07-24 20:04:04.414244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.472 [2024-07-24 20:04:04.414283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.472 [2024-07-24 20:04:04.414291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.472 [2024-07-24 20:04:04.414297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.472 [2024-07-24 20:04:04.414304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.472 [2024-07-24 20:04:04.414436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.472 [2024-07-24 20:04:04.414607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.472 [2024-07-24 20:04:04.414768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.472 [2024-07-24 20:04:04.414767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.413 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.413 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:17.413 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:17.413 [2024-07-24 20:04:05.185439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.413 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:17.413 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.413 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.413 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:17.673 Malloc1 00:24:17.673 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.673 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:17.934 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.195 [2024-07-24 20:04:05.898936] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.195 20:04:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:18.195 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:18.482 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:18.482 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:18.482 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:18.482 20:04:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.752 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:18.752 fio-3.35 
00:24:18.752 Starting 1 thread 00:24:18.752 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.321 00:24:21.321 test: (groupid=0, jobs=1): err= 0: pid=3780455: Wed Jul 24 20:04:08 2024 00:24:21.321 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2004msec) 00:24:21.321 slat (usec): min=2, max=279, avg= 2.17, stdev= 2.39 00:24:21.321 clat (usec): min=2918, max=11530, avg=5308.37, stdev=939.01 00:24:21.321 lat (usec): min=2920, max=11532, avg=5310.54, stdev=939.05 00:24:21.321 clat percentiles (usec): 00:24:21.321 | 1.00th=[ 3752], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4686], 00:24:21.321 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5276], 00:24:21.321 | 70.00th=[ 5407], 80.00th=[ 5735], 90.00th=[ 6456], 95.00th=[ 7177], 00:24:21.321 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10683], 99.95th=[11076], 00:24:21.321 | 99.99th=[11338] 00:24:21.321 bw ( KiB/s): min=54496, max=55528, per=99.93%, avg=55172.00, stdev=474.03, samples=4 00:24:21.321 iops : min=13624, max=13882, avg=13793.00, stdev=118.51, samples=4 00:24:21.321 write: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2004msec); 0 zone resets 00:24:21.321 slat (usec): min=2, max=263, avg= 2.24, stdev= 1.75 00:24:21.321 clat (usec): min=1988, max=7119, avg=3923.20, stdev=542.97 00:24:21.321 lat (usec): min=1990, max=7121, avg=3925.44, stdev=543.06 00:24:21.321 clat percentiles (usec): 00:24:21.321 | 1.00th=[ 2540], 5.00th=[ 2933], 10.00th=[ 3228], 20.00th=[ 3523], 00:24:21.321 | 30.00th=[ 3720], 40.00th=[ 3851], 50.00th=[ 3982], 60.00th=[ 4047], 00:24:21.321 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4752], 00:24:21.321 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 6194], 99.95th=[ 6390], 00:24:21.321 | 99.99th=[ 6587] 00:24:21.321 bw ( KiB/s): min=54832, max=55512, per=100.00%, avg=55158.00, stdev=287.21, samples=4 00:24:21.321 iops : min=13708, max=13878, avg=13789.50, stdev=71.80, samples=4 00:24:21.321 lat (msec) : 2=0.01%, 4=27.80%, 10=72.05%, 20=0.14% 
00:24:21.321 cpu : usr=69.30%, sys=24.51%, ctx=22, majf=0, minf=6 00:24:21.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:21.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:21.321 issued rwts: total=27661,27631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.321 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:21.321 00:24:21.321 Run status group 0 (all jobs): 00:24:21.321 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:21.321 WRITE: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.321 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:21.322 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:21.322 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:21.322 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:21.322 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.322 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.322 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:21.322 20:04:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:21.322 20:04:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:21.322 20:04:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:21.322 20:04:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:21.322 20:04:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:24:21.588 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:21.588 fio-3.35 00:24:21.588 Starting 1 thread 00:24:21.588 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.136 00:24:24.136 test: (groupid=0, jobs=1): err= 0: pid=3780996: Wed Jul 24 20:04:11 2024 00:24:24.136 read: IOPS=8783, BW=137MiB/s (144MB/s)(275MiB/2005msec) 00:24:24.136 slat (usec): min=3, max=118, avg= 3.62, stdev= 1.46 00:24:24.136 clat (usec): min=2603, max=20824, avg=9136.52, stdev=2476.92 00:24:24.136 lat (usec): min=2606, max=20831, avg=9140.14, stdev=2477.19 00:24:24.136 clat percentiles (usec): 00:24:24.136 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6980], 00:24:24.136 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9634], 00:24:24.136 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12125], 95.00th=[13304], 00:24:24.136 | 99.00th=[16319], 99.50th=[17171], 99.90th=[18482], 99.95th=[18482], 00:24:24.136 | 99.99th=[19006] 00:24:24.136 bw ( KiB/s): min=58432, max=83488, per=49.91%, avg=70144.00, stdev=12279.52, samples=4 00:24:24.136 iops : min= 3652, max= 5218, avg=4384.00, stdev=767.47, samples=4 00:24:24.136 write: IOPS=5211, BW=81.4MiB/s (85.4MB/s)(144MiB/1766msec); 0 zone resets 00:24:24.136 slat (usec): min=39, max=338, avg=41.07, stdev= 7.53 00:24:24.136 clat (usec): min=2835, max=19070, avg=9760.70, stdev=1879.05 00:24:24.136 lat (usec): min=2876, max=19219, avg=9801.78, stdev=1881.76 00:24:24.136 clat percentiles (usec): 00:24:24.136 | 1.00th=[ 6325], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8291], 00:24:24.136 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:24:24.136 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11863], 95.00th=[12518], 00:24:24.136 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:24:24.136 | 99.99th=[19006] 00:24:24.136 bw ( KiB/s): min=61632, max=86944, per=87.67%, avg=73096.00, 
stdev=12543.68, samples=4 00:24:24.136 iops : min= 3852, max= 5434, avg=4568.50, stdev=783.98, samples=4 00:24:24.136 lat (msec) : 4=0.34%, 10=62.80%, 20=36.86%, 50=0.01% 00:24:24.136 cpu : usr=82.39%, sys=13.57%, ctx=9, majf=0, minf=19 00:24:24.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:24.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.136 issued rwts: total=17611,9203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.136 00:24:24.136 Run status group 0 (all jobs): 00:24:24.136 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=275MiB (289MB), run=2005-2005msec 00:24:24.136 WRITE: bw=81.4MiB/s (85.4MB/s), 81.4MiB/s-81.4MiB/s (85.4MB/s-85.4MB/s), io=144MiB (151MB), run=1766-1766msec 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:24.136 rmmod nvme_tcp 00:24:24.136 rmmod nvme_fabrics 00:24:24.136 rmmod nvme_keyring 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3779633 ']' 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3779633 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3779633 ']' 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3779633 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3779633 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3779633' 00:24:24.136 killing process with pid 3779633 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3779633 00:24:24.136 20:04:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3779633 00:24:24.397 20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:24.397 
20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:24.397 20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:24.397 20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.397 20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:24.397 20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.397 20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.397 20:04:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.312 00:24:26.312 real 0m17.238s 00:24:26.312 user 1m7.853s 00:24:26.312 sys 0m7.133s 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.312 ************************************ 00:24:26.312 END TEST nvmf_fio_host 00:24:26.312 ************************************ 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.312 ************************************ 00:24:26.312 START TEST nvmf_failover 00:24:26.312 ************************************ 00:24:26.312 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:26.573 * Looking for test storage... 00:24:26.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.573 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.573 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:26.573 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.573 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.573 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.574 20:04:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.725 
20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:34.725 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:34.725 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:34.725 20:04:21 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:34.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:34.725 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.725 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:34.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:24:34.726 00:24:34.726 --- 10.0.0.2 ping statistics --- 00:24:34.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.726 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:24:34.726 00:24:34.726 --- 10.0.0.1 ping statistics --- 00:24:34.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.726 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3785640 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3785640 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3785640 ']' 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.726 20:04:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.726 [2024-07-24 20:04:21.583892] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:24:34.726 [2024-07-24 20:04:21.583960] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.726 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.726 [2024-07-24 20:04:21.669415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:34.726 [2024-07-24 20:04:21.763308] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.726 [2024-07-24 20:04:21.763365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.726 [2024-07-24 20:04:21.763373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.726 [2024-07-24 20:04:21.763380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.726 [2024-07-24 20:04:21.763386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.726 [2024-07-24 20:04:21.763511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.726 [2024-07-24 20:04:21.763812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.726 [2024-07-24 20:04:21.763812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:34.726 [2024-07-24 20:04:22.529469] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.726 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:34.988 Malloc0 00:24:34.988 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.988 20:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.249 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.510 [2024-07-24 20:04:23.231141] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.510 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.510 [2024-07-24 20:04:23.403581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.510 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:35.774 [2024-07-24 20:04:23.564063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3786003 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3786003 /var/tmp/bdevperf.sock 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3786003 ']' 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.774 20:04:23 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.774 20:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:36.719 20:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.719 20:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:36.719 20:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.719 NVMe0n1 00:24:36.719 20:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.980 00:24:36.980 20:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.980 20:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3786345 00:24:36.980 20:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:38.365 20:04:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420
00:24:38.365 [2024-07-24 20:04:26.081828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cdb80 is same with the state(5) to be set
00:24:38.366 20:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:41.668 20:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:41.668
00:24:41.668 20:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:41.668 [2024-07-24 20:04:29.522938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ce990 is same with the state(5) to be set
00:24:41.669 20:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:44.970 20:04:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:44.970 [2024-07-24 20:04:32.698098] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:44.970 20:04:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:45.912 20:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:46.173 [2024-07-24 20:04:33.873954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cf870 is same with the state(5) to be set
00:24:46.173 20:04:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3786345
00:24:52.765 0
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3786003
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3786003 ']'
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3786003
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3786003
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3786003'
killing process with pid 3786003
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3786003
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3786003
00:24:52.765 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:52.765 [2024-07-24 20:04:23.631926] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:24:52.765 [2024-07-24 20:04:23.631980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786003 ] 00:24:52.765 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.765 [2024-07-24 20:04:23.690558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.765 [2024-07-24 20:04:23.754619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.765 Running I/O for 15 seconds... 00:24:52.765 [2024-07-24 20:04:26.082792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 
[2024-07-24 20:04:26.082978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.082985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.082994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.765 [2024-07-24 20:04:26.083195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.765 [2024-07-24 20:04:26.083207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:52.766 [2024-07-24 20:04:26.083267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.766 [2024-07-24 20:04:26.083451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.766 [2024-07-24 20:04:26.083458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~90 repeated NOTICE pairs elided: READ commands (lba 100432-100776, SGL TRANSPORT DATA BLOCK) and WRITE commands (lba 100784-101136, SGL DATA BLOCK OFFSET) on qid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:24:52.768 [2024-07-24 20:04:26.084947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:52.768 [2024-07-24 20:04:26.084955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:52.768 [2024-07-24 20:04:26.084964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:101144 len:8 PRP1 0x0 PRP2 0x0 00:24:52.768 [2024-07-24 20:04:26.084972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.768 [2024-07-24 20:04:26.085010] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18352c0 was disconnected and freed. reset controller. 00:24:52.768 [2024-07-24 20:04:26.085022] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:52.768 [2024-07-24 20:04:26.085041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.768 [2024-07-24 20:04:26.085051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.768 [2024-07-24 20:04:26.085059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.768 [2024-07-24 20:04:26.085066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.768 [2024-07-24 20:04:26.085075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.768 [2024-07-24 20:04:26.085082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.768 [2024-07-24 20:04:26.085090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.768 [2024-07-24 20:04:26.085097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.768 [2024-07-24 20:04:26.085104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.769 [2024-07-24 20:04:26.085143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1838ef0 (9): Bad file descriptor 00:24:52.769 [2024-07-24 20:04:26.088709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.769 [2024-07-24 20:04:26.256183] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... repeated NOTICE pairs elided: WRITE commands (lba 66104-66200, SGL DATA BLOCK OFFSET) and READ commands (lba 65488-65544, SGL TRANSPORT DATA BLOCK) on qid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
00:24:52.769 [2024-07-24 20:04:29.524116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66208
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 
20:04:29.524219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.769 [2024-07-24 20:04:29.524243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.769 [2024-07-24 20:04:29.524258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.769 [2024-07-24 20:04:29.524267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.769 [2024-07-24 20:04:29.524275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:52.770 [2024-07-24 20:04:29.524709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.770 [2024-07-24 20:04:29.524773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.770 [2024-07-24 20:04:29.524856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.770 [2024-07-24 20:04:29.524865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.524872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.524889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.524905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.524922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.524939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.524955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.524972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 
[2024-07-24 20:04:29.524989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.524998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 
[2024-07-24 20:04:29.525271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.771 [2024-07-24 20:04:29.525355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.771 [2024-07-24 20:04:29.525364] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.771 [2024-07-24 20:04:29.525371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ commands lba:65864-66096 and WRITE commands lba:66496-66504 on sqid:1, all completed ABORTED - SQ DELETION (00/08) ...]
00:24:52.772 [2024-07-24 20:04:29.525903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:52.772 [2024-07-24 20:04:29.525909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:52.772 [2024-07-24 20:04:29.525919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66096 len:8 PRP1 0x0 PRP2 0x0 00:24:52.772 [2024-07-24 20:04:29.525926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.772 [2024-07-24 20:04:29.525961] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1867c80 was disconnected and freed. reset controller.
00:24:52.772 [2024-07-24 20:04:29.525971] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3-0) completed ABORTED - SQ DELETION (00/08), elided ...]
00:24:52.772 [2024-07-24 20:04:29.526052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:52.772 [2024-07-24 20:04:29.529613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.772 [2024-07-24 20:04:29.529638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1838ef0 (9): Bad file descriptor 00:24:52.772 [2024-07-24 20:04:29.566849] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) completed ABORTED - SQ DELETION (00/08) at 20:04:33.875, elided ...]
00:24:52.772 [2024-07-24 20:04:33.875550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838ef0 is same with the state(5) to be set
00:24:52.772 [2024-07-24 20:04:33.875602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.772 [2024-07-24 20:04:33.875617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: WRITE commands lba:112976-113464 on sqid:1, all completed ABORTED - SQ DELETION (00/08) ...]
00:24:52.774 [2024-07-24 20:04:33.876680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.774 [2024-07-24 20:04:33.876688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated ABORTED - SQ DELETION completions for READ commands lba:112728-112752 on sqid:1 elided ...]
[2024-07-24 20:04:33.876763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.774 [2024-07-24 20:04:33.876770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.774 [2024-07-24 20:04:33.876779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.774 [2024-07-24 20:04:33.876787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.774 [2024-07-24 20:04:33.876795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.774 [2024-07-24 20:04:33.876803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.774 [2024-07-24 20:04:33.876811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.774 [2024-07-24 20:04:33.876818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.876834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.876850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.876866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.876882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.876900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.876916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.876932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.876949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.876965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.876981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.876991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.876998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:52.775 [2024-07-24 20:04:33.877039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 
[2024-07-24 20:04:33.877326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.775 [2024-07-24 20:04:33.877333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.877349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.877365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.877381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.877397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.877413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.877430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.775 [2024-07-24 20:04:33.877447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.775 [2024-07-24 20:04:33.877456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 
[2024-07-24 20:04:33.877602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.776 [2024-07-24 20:04:33.877725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:52.776 [2024-07-24 20:04:33.877749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:52.776 [2024-07-24 20:04:33.877756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113736 len:8 PRP1 0x0 PRP2 0x0 00:24:52.776 [2024-07-24 20:04:33.877764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.776 [2024-07-24 20:04:33.877800] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18678b0 was disconnected and freed. reset controller. 00:24:52.776 [2024-07-24 20:04:33.877810] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:52.776 [2024-07-24 20:04:33.877819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
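The forced disconnects above each end in a "Resetting controller successful" notice, and failover.sh later requires exactly three of them (the `grep -c` / `(( count != 3 ))` check at @65/@67 further down). A minimal stand-alone sketch of that count check, with an illustrative here-doc log standing in for the real captured bdevperf output:

```shell
# Sketch of the failover.sh count check: count "Resetting controller
# successful" notices and require exactly three, one per forced failover.
# The here-doc log below is illustrative, not the real bdevperf output.
log=$(mktemp)
cat > "$log" <<'EOF'
bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
EOF
count=$(grep -c 'Resetting controller successful' "$log")
rm -f "$log"
if [ "$count" -ne 3 ]; then
    echo "expected 3 successful resets, got $count" >&2
    exit 1
fi
echo "count=$count"
```

The real test applies the same `grep -c` to the try.txt transcript it saved during the run.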
00:24:52.776 [2024-07-24 20:04:33.881339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:52.776 [2024-07-24 20:04:33.881364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1838ef0 (9): Bad file descriptor
00:24:52.776 [2024-07-24 20:04:33.928662] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:52.776
00:24:52.776 Latency(us)
00:24:52.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:52.776 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:52.776 Verification LBA range: start 0x0 length 0x4000
00:24:52.776 NVMe0n1 : 15.01 11865.41 46.35 615.06 0.00 10227.88 1044.48 16056.32
00:24:52.776 ===================================================================================================================
00:24:52.776 Total : 11865.41 46.35 615.06 0.00 10227.88 1044.48 16056.32
00:24:52.776 Received shutdown signal, test time was about 15.000000 seconds
00:24:52.776
00:24:52.776 Latency(us)
00:24:52.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:52.776 ===================================================================================================================
00:24:52.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3789359
00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3789359 /var/tmp/bdevperf.sock
00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3789359 ']' 00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:52.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.776 20:04:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:53.349 20:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.349 20:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:53.349 20:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:53.349 [2024-07-24 20:04:41.235503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:53.349 20:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:53.610 [2024-07-24 20:04:41.395888] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:53.610 20:04:41 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:53.871 NVMe0n1 00:24:53.871 20:04:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.132 00:24:54.392 20:04:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.392 00:24:54.392 20:04:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:54.392 20:04:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.653 20:04:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.914 20:04:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:58.216 20:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:58.216 20:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:58.216 20:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3790377 00:24:58.216 20:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # 
wait 3790377 00:24:58.216 20:04:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.195 0 00:24:59.195 20:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:59.195 [2024-07-24 20:04:40.322472] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:24:59.195 [2024-07-24 20:04:40.322529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789359 ] 00:24:59.195 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.195 [2024-07-24 20:04:40.381007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.195 [2024-07-24 20:04:40.443154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.195 [2024-07-24 20:04:42.637565] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:59.195 [2024-07-24 20:04:42.637609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.195 [2024-07-24 20:04:42.637620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.195 [2024-07-24 20:04:42.637629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.195 [2024-07-24 20:04:42.637637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.195 [2024-07-24 20:04:42.637645] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.195 [2024-07-24 20:04:42.637652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.195 [2024-07-24 20:04:42.637659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.195 [2024-07-24 20:04:42.637666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.195 [2024-07-24 20:04:42.637673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.195 [2024-07-24 20:04:42.637700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.195 [2024-07-24 20:04:42.637714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1eef0 (9): Bad file descriptor 00:24:59.195 [2024-07-24 20:04:42.729437] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:59.195 Running I/O for 1 seconds... 
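The try.txt dump above captures one failover cycle end to end: alternate listeners are registered, bdevperf attaches on the primary path, and tearing that path down drives the "Start failover" and reset notices. Stripped of the harness, the flow comes down to a handful of rpc.py calls; a sketch, where `rpc_cmd` is a hypothetical wrapper that only prints each invocation instead of contacting a live target:

```shell
# Sketch of the RPC sequence this log exercises. rpc_cmd is a
# hypothetical stand-in that prints each scripts/rpc.py invocation
# rather than talking to a running SPDK target.
rpc_cmd() {
    echo "scripts/rpc.py $*"
}

NQN=nqn.2016-06.io.spdk:cnode1

# Extra TCP listeners give the initiator paths to fail over to.
for port in 4421 4422; do
    rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# bdevperf attaches on the primary path (4420); detaching that path is
# what produces the "Start failover" notices seen in the transcript.
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
```

The argument order mirrors the actual calls logged at @76-@84 and @95-@103.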
00:24:59.195 00:24:59.195 Latency(us) 00:24:59.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.195 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:59.195 Verification LBA range: start 0x0 length 0x4000 00:24:59.195 NVMe0n1 : 1.05 10771.51 42.08 0.00 0.00 11374.97 2744.32 45219.84 00:24:59.195 =================================================================================================================== 00:24:59.195 Total : 10771.51 42.08 0.00 0.00 11374.97 2744.32 45219.84 00:24:59.195 20:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.195 20:04:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:59.456 20:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.456 20:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:59.456 20:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.717 20:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.978 20:04:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.280 
20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3789359 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3789359 ']' 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3789359 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3789359 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3789359' 00:25:03.280 killing process with pid 3789359 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3789359 00:25:03.280 20:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3789359 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.280 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.280 rmmod nvme_tcp 00:25:03.541 rmmod nvme_fabrics 00:25:03.541 rmmod nvme_keyring 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3785640 ']' 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3785640 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3785640 ']' 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3785640 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3785640 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3785640' 00:25:03.541 killing process with pid 3785640 00:25:03.541 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3785640 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3785640 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.542 20:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:06.090 00:25:06.090 real 0m39.309s 00:25:06.090 user 2m1.570s 00:25:06.090 sys 0m7.979s 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:06.090 ************************************ 00:25:06.090 END TEST nvmf_failover 00:25:06.090 ************************************ 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.090 ************************************ 00:25:06.090 START TEST nvmf_host_discovery 00:25:06.090 ************************************ 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:06.090 * Looking for test storage... 00:25:06.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.090 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:06.091 20:04:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.682 20:05:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:12.682 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.682 20:05:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:12.682 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.682 20:05:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:12.682 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:12.682 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.682 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.945 20:05:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:12.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:25:12.945 00:25:12.945 --- 10.0.0.2 ping statistics --- 00:25:12.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.945 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:25:12.945 00:25:12.945 --- 10.0.0.1 ping statistics --- 00:25:12.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.945 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:12.945 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3795614 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3795614 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3795614 ']' 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.207 20:05:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.207 [2024-07-24 20:05:00.957628] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:25:13.207 [2024-07-24 20:05:00.957691] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.207 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.207 [2024-07-24 20:05:01.045288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.207 [2024-07-24 20:05:01.138059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.207 [2024-07-24 20:05:01.138120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:13.207 [2024-07-24 20:05:01.138129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.207 [2024-07-24 20:05:01.138135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.207 [2024-07-24 20:05:01.138141] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.207 [2024-07-24 20:05:01.138183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.149 [2024-07-24 20:05:01.797540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.149 [2024-07-24 20:05:01.809824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:14.149 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.150 null0 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.150 null1 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3795735 00:25:14.150 
20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3795735 /tmp/host.sock 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3795735 ']' 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:14.150 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:14.150 20:05:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.150 [2024-07-24 20:05:01.903733] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:25:14.150 [2024-07-24 20:05:01.903797] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795735 ] 00:25:14.150 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.150 [2024-07-24 20:05:01.967758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.150 [2024-07-24 20:05:02.041562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.093 20:05:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.093 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:15.094 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:15.094 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.094 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.094 [2024-07-24 20:05:03.036901] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.094 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.094 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:15.355 20:05:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:15.355 20:05:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:15.355 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:15.356 20:05:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:25:15.928 [2024-07-24 20:05:03.707347] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:15.928 [2024-07-24 20:05:03.707368] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:15.928 [2024-07-24 20:05:03.707382] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:15.928 [2024-07-24 20:05:03.794666] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:16.189 [2024-07-24 20:05:04.019813] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:16.189 [2024-07-24 20:05:04.019833] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:16.451 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # return 0 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
get_notification_count 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.713 [2024-07-24 20:05:04.585227] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.713 [2024-07-24 20:05:04.586235] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:16.713 [2024-07-24 20:05:04.586262] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.713 
20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:16.713 20:05:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.713 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT 
$NVMF_SECOND_PORT" ]]' 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:16.974 [2024-07-24 20:05:04.715084] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:16.974 20:05:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:16.974 [2024-07-24 20:05:04.822070] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:16.974 [2024-07-24 20:05:04.822092] 
bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:16.974 [2024-07-24 20:05:04.822098] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # 
expected_count=0 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.917 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.180 [2024-07-24 20:05:05.873914] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:18.180 [2024-07-24 20:05:05.873937] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:18.180 [2024-07-24 20:05:05.881528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.180 [2024-07-24 20:05:05.881549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.180 [2024-07-24 20:05:05.881559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.180 [2024-07-24 20:05:05.881566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.180 [2024-07-24 20:05:05.881575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.180 [2024-07-24 20:05:05.881582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.180 [2024-07-24 20:05:05.881590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.180 [2024-07-24 20:05:05.881598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.180 [2024-07-24 20:05:05.881605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.180 20:05:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.180 [2024-07-24 20:05:05.891541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.180 [2024-07-24 20:05:05.901580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.180 [2024-07-24 20:05:05.902066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.180 [2024-07-24 20:05:05.902082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee9d0 with addr=10.0.0.2, port=4420 00:25:18.180 [2024-07-24 20:05:05.902090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.180 [2024-07-24 20:05:05.902102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.180 [2024-07-24 20:05:05.902119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.180 [2024-07-24 20:05:05.902126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:18.180 
[2024-07-24 20:05:05.902133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.180 [2024-07-24 20:05:05.902145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.180 [2024-07-24 20:05:05.911636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.180 [2024-07-24 20:05:05.911769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.180 [2024-07-24 20:05:05.911782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee9d0 with addr=10.0.0.2, port=4420 00:25:18.180 [2024-07-24 20:05:05.911794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.180 [2024-07-24 20:05:05.911806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.180 [2024-07-24 20:05:05.911817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.180 [2024-07-24 20:05:05.911823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:18.180 [2024-07-24 20:05:05.911830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.180 [2024-07-24 20:05:05.911840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.180 [2024-07-24 20:05:05.921689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.180 [2024-07-24 20:05:05.922136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.180 [2024-07-24 20:05:05.922150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee9d0 with addr=10.0.0.2, port=4420 00:25:18.180 [2024-07-24 20:05:05.922157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.180 [2024-07-24 20:05:05.922169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.180 [2024-07-24 20:05:05.922187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.180 [2024-07-24 20:05:05.922193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:18.180 [2024-07-24 20:05:05.922206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.180 [2024-07-24 20:05:05.922217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.180 [2024-07-24 20:05:05.931744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.180 [2024-07-24 20:05:05.932134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.180 [2024-07-24 20:05:05.932147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee9d0 with addr=10.0.0.2, port=4420 00:25:18.180 [2024-07-24 20:05:05.932154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.180 [2024-07-24 20:05:05.932165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.180 [2024-07-24 20:05:05.932175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.180 [2024-07-24 20:05:05.932182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:18.180 [2024-07-24 20:05:05.932189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.180 [2024-07-24 20:05:05.932199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.180 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.180 [2024-07-24 20:05:05.941797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.180 [2024-07-24 20:05:05.942248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.181 [2024-07-24 20:05:05.942272] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee9d0 with addr=10.0.0.2, port=4420 00:25:18.181 [2024-07-24 20:05:05.942281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.181 [2024-07-24 20:05:05.942295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.181 [2024-07-24 20:05:05.942316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.181 [2024-07-24 20:05:05.942323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:18.181 [2024-07-24 20:05:05.942330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.181 [2024-07-24 20:05:05.942341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.181 [2024-07-24 20:05:05.951850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.181 [2024-07-24 20:05:05.952424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.181 [2024-07-24 20:05:05.952463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee9d0 with addr=10.0.0.2, port=4420 00:25:18.181 [2024-07-24 20:05:05.952473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.181 [2024-07-24 20:05:05.952492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.181 [2024-07-24 20:05:05.952516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.181 [2024-07-24 20:05:05.952524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:18.181 [2024-07-24 20:05:05.952532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.181 [2024-07-24 20:05:05.952547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.181 [2024-07-24 20:05:05.961908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:18.181 [2024-07-24 20:05:05.962489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.181 [2024-07-24 20:05:05.962527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee9d0 with addr=10.0.0.2, port=4420 00:25:18.181 [2024-07-24 20:05:05.962538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee9d0 is same with the state(5) to be set 00:25:18.181 [2024-07-24 20:05:05.962556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee9d0 (9): Bad file descriptor 00:25:18.181 [2024-07-24 20:05:05.962595] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:18.181 [2024-07-24 20:05:05.962613] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:18.181 [2024-07-24 20:05:05.962643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:18.181 [2024-07-24 20:05:05.962653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:18.181 [2024-07-24 20:05:05.962661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:18.181 [2024-07-24 20:05:05.962675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.181 20:05:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:18.181 20:05:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.181 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.442 20:05:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:18.442 20:05:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.442 20:05:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.385 [2024-07-24 20:05:07.296384] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.385 [2024-07-24 20:05:07.296401] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.385 [2024-07-24 20:05:07.296414] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.646 [2024-07-24 20:05:07.423832] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:19.646 [2024-07-24 20:05:07.529990] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:19.646 [2024-07-24 20:05:07.530023] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:19.646 request: 00:25:19.646 { 00:25:19.646 "name": "nvme", 00:25:19.646 "trtype": "tcp", 00:25:19.646 "traddr": "10.0.0.2", 00:25:19.646 "adrfam": "ipv4", 00:25:19.646 "trsvcid": "8009", 00:25:19.646 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:19.646 "wait_for_attach": true, 00:25:19.646 "method": "bdev_nvme_start_discovery", 00:25:19.646 "req_id": 1 00:25:19.646 } 00:25:19.646 Got JSON-RPC error response 00:25:19.646 response: 00:25:19.646 { 00:25:19.646 "code": -17, 00:25:19.646 "message": "File exists" 00:25:19.646 } 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:19.646 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.931 20:05:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.931 request: 00:25:19.931 { 00:25:19.931 "name": "nvme_second", 00:25:19.931 "trtype": "tcp", 00:25:19.931 "traddr": "10.0.0.2", 00:25:19.931 "adrfam": "ipv4", 00:25:19.931 "trsvcid": "8009", 00:25:19.931 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:19.931 "wait_for_attach": true, 00:25:19.931 "method": "bdev_nvme_start_discovery", 00:25:19.931 "req_id": 1 00:25:19.931 } 00:25:19.931 Got JSON-RPC error response 00:25:19.931 response: 00:25:19.931 { 00:25:19.931 "code": -17, 00:25:19.931 "message": "File exists" 00:25:19.931 } 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:19.931 
20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:19.931 20:05:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.931 20:05:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.896 [2024-07-24 20:05:08.793523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.896 [2024-07-24 20:05:08.793554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb530 with addr=10.0.0.2, port=8010 00:25:20.896 [2024-07-24 20:05:08.793567] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:20.896 [2024-07-24 20:05:08.793574] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:20.896 [2024-07-24 20:05:08.793580] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:22.282 [2024-07-24 20:05:09.796070] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.282 [2024-07-24 20:05:09.796094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb530 with addr=10.0.0.2, port=8010 00:25:22.282 [2024-07-24 20:05:09.796105] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:22.282 [2024-07-24 20:05:09.796112] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:22.282 [2024-07-24 20:05:09.796118] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:22.854 [2024-07-24 20:05:10.797952] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:22.854 request: 00:25:22.854 { 00:25:22.854 "name": "nvme_second", 00:25:22.854 "trtype": "tcp", 00:25:22.854 "traddr": "10.0.0.2", 00:25:22.854 "adrfam": "ipv4", 00:25:22.854 "trsvcid": "8010", 00:25:22.854 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:22.854 "wait_for_attach": false, 00:25:22.854 "attach_timeout_ms": 3000, 00:25:22.854 "method": "bdev_nvme_start_discovery", 00:25:22.854 "req_id": 1 00:25:22.854 } 00:25:22.854 Got JSON-RPC error response 00:25:22.854 response: 00:25:22.854 { 00:25:22.854 "code": -110, 00:25:22.854 "message": "Connection timed out" 00:25:22.854 } 00:25:22.854 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:22.854 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:22.854 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:22.854 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:22.854 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:22.854 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3795735 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:23.116 rmmod nvme_tcp 00:25:23.116 rmmod nvme_fabrics 00:25:23.116 rmmod nvme_keyring 00:25:23.116 20:05:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3795614 ']' 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3795614 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3795614 ']' 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3795614 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3795614 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3795614' 00:25:23.116 killing process with pid 3795614 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3795614 00:25:23.116 20:05:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3795614 00:25:23.377 20:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:23.377 20:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:23.377 20:05:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:23.377 20:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.377 20:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:23.377 20:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.377 20:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.377 20:05:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.292 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:25.292 00:25:25.292 real 0m19.524s 00:25:25.292 user 0m22.968s 00:25:25.292 sys 0m6.662s 00:25:25.292 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:25.292 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.292 ************************************ 00:25:25.292 END TEST nvmf_host_discovery 00:25:25.292 ************************************ 00:25:25.292 20:05:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:25.292 20:05:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:25.292 20:05:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:25.292 20:05:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.559 ************************************ 00:25:25.559 START TEST nvmf_host_multipath_status 00:25:25.559 ************************************ 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:25.559 * Looking for test storage... 00:25:25.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.559 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.560 20:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.560 20:05:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:33.709 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.710 
20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:33.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:33.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:33.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.710 20:05:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:33.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.710 20:05:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:33.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:25:33.710 00:25:33.710 --- 10.0.0.2 ping statistics --- 00:25:33.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.710 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:25:33.710 00:25:33.710 --- 10.0.0.1 ping statistics --- 00:25:33.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.710 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.710 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:33.711 20:05:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3801899 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3801899 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3801899 ']' 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:33.711 20:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:33.711 [2024-07-24 20:05:20.632337] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:25:33.711 [2024-07-24 20:05:20.632404] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.711 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.711 [2024-07-24 20:05:20.703175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:33.711 [2024-07-24 20:05:20.776895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.711 [2024-07-24 20:05:20.776932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.711 [2024-07-24 20:05:20.776940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.711 [2024-07-24 20:05:20.776946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.711 [2024-07-24 20:05:20.776952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:33.711 [2024-07-24 20:05:20.777096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.711 [2024-07-24 20:05:20.777097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3801899 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:33.711 [2024-07-24 20:05:21.581104] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.711 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:33.972 Malloc0 00:25:33.972 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:33.972 20:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.233 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.494 [2024-07-24 20:05:22.205700] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:34.494 [2024-07-24 20:05:22.362083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3802265 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3802265 /var/tmp/bdevperf.sock 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3802265 ']' 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:34.494 20:05:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:34.494 20:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:35.437 20:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:35.437 20:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:35.437 20:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:35.437 20:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:36.008 Nvme0n1 00:25:36.008 20:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:36.267 Nvme0n1 00:25:36.267 20:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:36.267 20:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 
00:25:38.810 20:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:38.810 20:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:38.810 20:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:38.810 20:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:39.753 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:39.753 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:39.753 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.753 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.753 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.753 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:39.753 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.753 20:05:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.013 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.013 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.013 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.013 20:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.274 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.274 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.274 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.274 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.274 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.274 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.274 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.274 
20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.534 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.534 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.534 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.534 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.795 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.795 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:40.795 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:40.795 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.055 20:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:42.072 20:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:42.072 20:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:42.072 20:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.072 20:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.334 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.596 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.596 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.596 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.596 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.857 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.118 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.118 20:05:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:43.118 20:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:43.118 20:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:43.379 20:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:44.323 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:44.323 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:44.323 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.323 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.584 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.584 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:44.584 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.584 20:05:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.845 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.105 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.105 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.105 20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.105 
20:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.366 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.366 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.366 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.366 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.366 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.366 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:45.366 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:45.625 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:45.886 20:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:46.829 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:46.829 20:05:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:46.830 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.830 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.090 20:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.351 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.351 20:05:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.351 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.351 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.612 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:47.873 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.873 
20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:47.873 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:47.873 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:48.135 20:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:49.077 20:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:49.077 20:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:49.077 20:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.077 20:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.338 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.338 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:49.338 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.338 20:05:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.599 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:49.859 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.859 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:49.859 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.859 
20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.120 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.120 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:50.120 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.120 20:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.120 20:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.120 20:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:50.120 20:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:50.381 20:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:50.642 20:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:51.584 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:51.584 20:05:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:51.584 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.584 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.584 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.584 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:51.584 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.584 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:51.845 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.845 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:51.845 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.845 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.105 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.105 20:05:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.105 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.105 20:05:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.105 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.105 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:52.105 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.105 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:52.366 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.366 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:52.366 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.366 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:52.626 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.627 
20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:52.627 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:52.627 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:52.888 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:53.150 20:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:54.092 20:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:54.092 20:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:54.092 20:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.092 20:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.092 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.092 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:54.092 
20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.092 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.353 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.353 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.353 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.353 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.614 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.614 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.614 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.614 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.614 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.614 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.614 
20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.614 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.876 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.876 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:54.876 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.876 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.136 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.136 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:55.136 20:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:55.136 20:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.397 20:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:25:56.338 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:56.338 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:56.338 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.338 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.599 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.599 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.599 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.599 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.876 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.161 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.161 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.161 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.161 20:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.161 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.422 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.422 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.422 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:25:57.422 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.422 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:57.422 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:57.684 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:57.684 20:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:59.070 20:05:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.070 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.071 20:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.331 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.331 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.331 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.331 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.592 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.592 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.592 20:05:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.592 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.592 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.592 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.592 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.592 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.853 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.853 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:59.853 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.114 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:00.114 20:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
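The repeated `port_status` checks traced above all follow one pattern: poll `bdev_nvme_get_io_paths` over the bdevperf RPC socket, extract a single field for one listener port with jq's `select()`, and compare it against the expected value. A minimal self-contained sketch of that pattern, where `get_io_paths` is a hand-made JSON stand-in for `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` and a grep extraction stands in for jq:

```shell
#!/usr/bin/env bash
# Hedged sketch of the port_status helper exercised in the log above.
# get_io_paths is a hand-made stand-in for:
#   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
get_io_paths() {
  cat <<'EOF'
{"poll_groups":[{"io_paths":[
 {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
 {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":false}
]}]}
EOF
}

# port_status PORT FIELD EXPECTED -> exit 0 iff the listener's field matches.
# The grep/cut extraction replaces the jq select() shown in the log:
#   jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD'
port_status() {
  local port=$1 field=$2 expected=$3 value
  value=$(get_io_paths | tr -d ' \n' \
          | grep -o "\"trsvcid\":\"$port\"},[^}]*" \
          | grep -o "\"$field\":[a-z]*" | cut -d: -f2)
  [[ $value == "$expected" ]]
}

port_status 4420 current true && echo "4420 current ok"
port_status 4421 accessible false && echo "4421 accessible ok"
```

The real test wraps six such calls in `check_status`, one per port/field pair, so a single ANA state change (e.g. `non_optimized inaccessible`) is verified across `current`, `connected`, and `accessible` on both listeners.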
00:26:01.055 20:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:01.055 20:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:01.055 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.055 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.316 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.316 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:01.316 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.316 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.577 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.838 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.838 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.838 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.838 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.099 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.099 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:02.099 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.099 20:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:26:02.099 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.099 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3802265 00:26:02.099 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3802265 ']' 00:26:02.099 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3802265 00:26:02.099 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:02.099 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.099 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3802265 00:26:02.364 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:02.364 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:02.364 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3802265' 00:26:02.364 killing process with pid 3802265 00:26:02.364 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3802265 00:26:02.364 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3802265 00:26:02.364 Connection closed with partial response: 00:26:02.364 00:26:02.364 00:26:02.364 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3802265 00:26:02.364 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
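The `killprocess` trace above follows a defensive shutdown flow: probe the pid with `kill -0`, on Linux read the process name via `ps -o comm=` and refuse to kill a `sudo` wrapper, then signal the process and `wait` for it so the pid is reaped. A minimal sketch of that flow, with a background `sleep` as a stand-in for the real bdevperf pid:

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess flow traced in the log
# (common/autotest_common.sh): liveness probe, sudo safety check, kill, wait.
killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if already gone
  if [[ $(uname) == Linux ]]; then
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1           # never signal a sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true             # reap so the pid cannot be reused
}

sleep 60 &                                    # stand-in for the bdevperf process
killprocess $!
```

Because bdevperf is torn down mid-I/O here, the "Connection closed with partial response" line in the log is expected; the follow-up `wait` on line 139 of the script collects the exit status before the captured `try.txt` output is dumped.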
00:26:02.364 [2024-07-24 20:05:22.425410] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:26:02.364 [2024-07-24 20:05:22.425470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802265 ] 00:26:02.364 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.364 [2024-07-24 20:05:22.475162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.364 [2024-07-24 20:05:22.526931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.364 Running I/O for 90 seconds... 00:26:02.364 [2024-07-24 20:05:35.803306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.364 [2024-07-24 20:05:35.803410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:02.364 
[2024-07-24 20:05:35.803624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 
20:05:35.803718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803946] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.803983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.803996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.804001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.804183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.804190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.804207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.804212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.804224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.804229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.804240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.804245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:02.364 [2024-07-24 20:05:35.804257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.364 [2024-07-24 20:05:35.804262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.804273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.804278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.804290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.804294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.804306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.804312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.805906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.805917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.806025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.806042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.806062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.806204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.806221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.365 [2024-07-24 20:05:35.806468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.365 [2024-07-24 20:05:35.806665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.365 [2024-07-24 20:05:35.806669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.806985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.806998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.366 [2024-07-24 20:05:35.807446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:02.366 [2024-07-24 20:05:35.807463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.367 [2024-07-24 20:05:35.807684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.367 [2024-07-24 20:05:35.807706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.807981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.807986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.808002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.808007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.808023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.808029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:35.808045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:35.808050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:47.983414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:47.983450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.367 [2024-07-24 20:05:47.983467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.367 [2024-07-24 20:05:47.983483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:47.983498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:47.983517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:47.983533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:47.983548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.367 [2024-07-24 20:05:47.983563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:02.367 [2024-07-24 20:05:47.983573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.983578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.983588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.983593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.983603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.983608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.983618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.983623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.983633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.983638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.983649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.983654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.983664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.983668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.983779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.983787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.984878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.984926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.984940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.984945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.984956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.984961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.984972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.984976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.985523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.985617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.368 [2024-07-24 20:05:47.985632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.368 [2024-07-24 20:05:47.985677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.368 [2024-07-24 20:05:47.985935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.368 [2024-07-24 20:05:47.985944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:02.368 [2024-07-24 20:05:47.985994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.369 [2024-07-24 20:05:47.986001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:02.369 [2024-07-24 20:05:47.986012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.369 [2024-07-24 20:05:47.986017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:02.369 Received shutdown signal, test time was about 25.777440 seconds
00:26:02.369
00:26:02.369 Latency(us)
00:26:02.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.369 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:02.369 Verification LBA range: start 0x0 length 0x4000
00:26:02.369 Nvme0n1 : 25.78 10763.66 42.05 0.00 0.00 11870.51 1112.75 3019898.88
00:26:02.369 ===================================================================================================================
00:26:02.369 Total : 10763.66 42.05 0.00 0.00 11870.51 1112.75 3019898.88
00:26:02.369 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT
SIGTERM EXIT 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:02.630 rmmod nvme_tcp 00:26:02.630 rmmod nvme_fabrics 00:26:02.630 rmmod nvme_keyring 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3801899 ']' 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3801899 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3801899 ']' 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3801899 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:02.630 20:05:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3801899 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3801899' 00:26:02.630 killing process with pid 3801899 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3801899 00:26:02.630 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3801899 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.892 20:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.806 20:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:04.806 00:26:04.806 real 0m39.456s 00:26:04.806 user 1m38.114s 00:26:04.806 sys 0m12.335s 00:26:04.806 20:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:04.806 20:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:04.806 ************************************ 00:26:04.806 END TEST nvmf_host_multipath_status 00:26:04.806 ************************************ 00:26:04.806 20:05:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:04.806 20:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:04.806 20:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:04.806 20:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.068 ************************************ 00:26:05.068 START TEST nvmf_discovery_remove_ifc 00:26:05.068 ************************************ 00:26:05.068 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:05.068 * Looking for test storage... 
00:26:05.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:05.069 20:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:26:13.216 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.216 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.216 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.216 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.217 20:05:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:13.217 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:13.217 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:13.217 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.217 20:05:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:13.217 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.217 20:05:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.217 20:05:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.217 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:13.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms
00:26:13.217
00:26:13.217 --- 10.0.0.2 ping statistics ---
00:26:13.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:13.217 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms
00:26:13.217 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:13.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:13.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms
00:26:13.217
00:26:13.217 --- 10.0.0.1 ping statistics ---
00:26:13.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:13.217 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms
00:26:13.217 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc --
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3812012 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3812012 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3812012 ']' 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:13.218 [2024-07-24 20:06:00.114319] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:26:13.218 [2024-07-24 20:06:00.114378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.218 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.218 [2024-07-24 20:06:00.200633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.218 [2024-07-24 20:06:00.292480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.218 [2024-07-24 20:06:00.292543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.218 [2024-07-24 20:06:00.292552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.218 [2024-07-24 20:06:00.292559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.218 [2024-07-24 20:06:00.292565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:13.218 [2024-07-24 20:06:00.292590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.218 [2024-07-24 20:06:00.915721] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.218 [2024-07-24 20:06:00.923865] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:13.218 null0 00:26:13.218 [2024-07-24 20:06:00.955882] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3812224 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3812224 /tmp/host.sock 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3812224 ']' 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:13.218 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.218 20:06:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.218 [2024-07-24 20:06:01.001308] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:26:13.218 [2024-07-24 20:06:01.001349] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812224 ] 00:26:13.218 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.218 [2024-07-24 20:06:01.053220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.218 [2024-07-24 20:06:01.118035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.218 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.483 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.483 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:13.483 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.483 20:06:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.425 [2024-07-24 20:06:02.264434] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:14.425 [2024-07-24 20:06:02.264456] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:14.425 [2024-07-24 20:06:02.264471] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:14.425 [2024-07-24 20:06:02.353765] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:14.686 [2024-07-24 20:06:02.456422] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:14.686 [2024-07-24 20:06:02.456471] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:14.686 [2024-07-24 20:06:02.456495] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:14.686 [2024-07-24 20:06:02.456508] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:14.686 [2024-07-24 20:06:02.456529] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.686 [2024-07-24 20:06:02.462620] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f827f0 was disconnected and freed. delete nvme_qpair. 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:14.686 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.947 
20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.947 20:06:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.890 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.890 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.890 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.890 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.890 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.890 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.890 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.891 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.891 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.891 20:06:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.834 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.095 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.095 20:06:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.098 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.098 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.098 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.098 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:26:18.098 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.099 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.099 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.099 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.099 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.099 20:06:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.041 20:06:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.984 [2024-07-24 20:06:07.896953] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:19.984 [2024-07-24 20:06:07.897003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.984 [2024-07-24 20:06:07.897015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.984 [2024-07-24 20:06:07.897025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.984 [2024-07-24 20:06:07.897032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.984 [2024-07-24 20:06:07.897041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.984 [2024-07-24 20:06:07.897048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.984 [2024-07-24 20:06:07.897055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.985 [2024-07-24 20:06:07.897063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.985 [2024-07-24 20:06:07.897071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.985 [2024-07-24 20:06:07.897078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.985 [2024-07-24 20:06:07.897085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1f49060 is same with the state(5) to be set 00:26:19.985 [2024-07-24 20:06:07.906972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f49060 (9): Bad file descriptor 00:26:19.985 [2024-07-24 20:06:07.917010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:20.246 20:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.246 20:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.246 20:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.246 20:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.246 20:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.246 20:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.246 20:06:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.189 [2024-07-24 20:06:08.977245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:21.189 [2024-07-24 20:06:08.977290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f49060 with addr=10.0.0.2, port=4420 00:26:21.189 [2024-07-24 20:06:08.977305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f49060 is same with the state(5) to be set 00:26:21.189 [2024-07-24 20:06:08.977335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f49060 (9): Bad file descriptor 00:26:21.189 [2024-07-24 20:06:08.977712] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:21.189 [2024-07-24 20:06:08.977736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:21.189 [2024-07-24 20:06:08.977744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:21.189 [2024-07-24 20:06:08.977753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:21.189 [2024-07-24 20:06:08.977771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.189 [2024-07-24 20:06:08.977784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:21.189 20:06:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.189 20:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.189 20:06:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.133 [2024-07-24 20:06:09.980165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:22.133 [2024-07-24 20:06:09.980187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:22.133 [2024-07-24 20:06:09.980195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:22.133 [2024-07-24 20:06:09.980207] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:22.133 [2024-07-24 20:06:09.980221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.133 [2024-07-24 20:06:09.980241] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:22.133 [2024-07-24 20:06:09.980265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.133 [2024-07-24 20:06:09.980275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.133 [2024-07-24 20:06:09.980286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.133 [2024-07-24 20:06:09.980294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.133 [2024-07-24 20:06:09.980302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.133 [2024-07-24 20:06:09.980309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.133 [2024-07-24 20:06:09.980317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.133 [2024-07-24 20:06:09.980325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.133 [2024-07-24 20:06:09.980333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.133 [2024-07-24 20:06:09.980340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.134 [2024-07-24 20:06:09.980347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:22.134 [2024-07-24 20:06:09.980595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f484c0 (9): Bad file descriptor 00:26:22.134 [2024-07-24 20:06:09.981607] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:22.134 [2024-07-24 20:06:09.981617] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.134 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.395 20:06:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:22.395 20:06:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.338 20:06:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:23.338 20:06:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.281 [2024-07-24 20:06:12.038392] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:24.281 [2024-07-24 20:06:12.038415] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:24.281 [2024-07-24 20:06:12.038430] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:24.281 [2024-07-24 20:06:12.167841] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:24.281 [2024-07-24 20:06:12.228900] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:24.281 [2024-07-24 20:06:12.228941] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:24.281 [2024-07-24 20:06:12.228963] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:24.281 [2024-07-24 20:06:12.228978] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:24.281 [2024-07-24 20:06:12.228986] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:24.542 [2024-07-24 20:06:12.236108] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f4fe50 was disconnected and freed. delete nvme_qpair. 
00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3812224 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3812224 ']' 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3812224 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3812224 
00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3812224' 00:26:24.542 killing process with pid 3812224 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3812224 00:26:24.542 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3812224 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:24.804 rmmod nvme_tcp 00:26:24.804 rmmod nvme_fabrics 00:26:24.804 rmmod nvme_keyring 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3812012 ']' 00:26:24.804 
20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3812012 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3812012 ']' 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3812012 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3812012 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3812012' 00:26:24.804 killing process with pid 3812012 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3812012 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3812012 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.804 20:06:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:27.352 00:26:27.352 real 0m22.013s 00:26:27.352 user 0m25.493s 00:26:27.352 sys 0m6.438s 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:27.352 ************************************ 00:26:27.352 END TEST nvmf_discovery_remove_ifc 00:26:27.352 ************************************ 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.352 ************************************ 00:26:27.352 START TEST nvmf_identify_kernel_target 00:26:27.352 ************************************ 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:27.352 * Looking for test storage... 
00:26:27.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.352 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.353 20:06:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.353 20:06:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:27.353 20:06:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:27.353 20:06:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:27.353 20:06:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:33.946 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.946 20:06:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:33.946 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.946 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.947 20:06:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:33.947 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:33.947 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.947 
20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.947 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.209 
20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.209 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.209 20:06:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:34.209 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.209 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.209 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.209 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:34.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:26:34.209 00:26:34.209 --- 10.0.0.2 ping statistics --- 00:26:34.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.209 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:26:34.209 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:34.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.473 ms 00:26:34.209 00:26:34.209 --- 10.0.0.1 ping statistics --- 00:26:34.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.209 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:26:34.209 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:34.210 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.471 20:06:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:34.471 20:06:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:37.777 Waiting for block devices as requested 00:26:37.777 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:37.777 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:37.777 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:37.777 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:38.037 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:38.037 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:38.037 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:38.298 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:38.298 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:38.559 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:38.559 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:38.559 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:38.821 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:38.821 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:38.821 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:38.821 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:39.082 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:39.381 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:39.381 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:39.381 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:39.382 No valid GPT data, bailing 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:39.382 00:26:39.382 Discovery Log Number of Records 2, Generation counter 2 00:26:39.382 =====Discovery Log Entry 0====== 00:26:39.382 trtype: tcp 00:26:39.382 adrfam: ipv4 00:26:39.382 subtype: current discovery subsystem 00:26:39.382 treq: not specified, sq flow control disable supported 00:26:39.382 portid: 1 00:26:39.382 trsvcid: 4420 00:26:39.382 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:39.382 traddr: 10.0.0.1 00:26:39.382 eflags: none 00:26:39.382 sectype: none 00:26:39.382 =====Discovery Log Entry 1====== 00:26:39.382 trtype: tcp 00:26:39.382 adrfam: ipv4 00:26:39.382 subtype: nvme subsystem 00:26:39.382 treq: not specified, sq flow control disable supported 00:26:39.382 portid: 1 
00:26:39.382 trsvcid: 4420 00:26:39.382 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:39.382 traddr: 10.0.0.1 00:26:39.382 eflags: none 00:26:39.382 sectype: none 00:26:39.382 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:39.382 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:39.382 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.663 ===================================================== 00:26:39.663 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:39.663 ===================================================== 00:26:39.663 Controller Capabilities/Features 00:26:39.663 ================================ 00:26:39.663 Vendor ID: 0000 00:26:39.663 Subsystem Vendor ID: 0000 00:26:39.663 Serial Number: 6c55e3f7109b2d97810c 00:26:39.664 Model Number: Linux 00:26:39.664 Firmware Version: 6.7.0-68 00:26:39.664 Recommended Arb Burst: 0 00:26:39.664 IEEE OUI Identifier: 00 00 00 00:26:39.664 Multi-path I/O 00:26:39.664 May have multiple subsystem ports: No 00:26:39.664 May have multiple controllers: No 00:26:39.664 Associated with SR-IOV VF: No 00:26:39.664 Max Data Transfer Size: Unlimited 00:26:39.664 Max Number of Namespaces: 0 00:26:39.664 Max Number of I/O Queues: 1024 00:26:39.664 NVMe Specification Version (VS): 1.3 00:26:39.664 NVMe Specification Version (Identify): 1.3 00:26:39.664 Maximum Queue Entries: 1024 00:26:39.664 Contiguous Queues Required: No 00:26:39.664 Arbitration Mechanisms Supported 00:26:39.664 Weighted Round Robin: Not Supported 00:26:39.664 Vendor Specific: Not Supported 00:26:39.664 Reset Timeout: 7500 ms 00:26:39.664 Doorbell Stride: 4 bytes 00:26:39.664 NVM Subsystem Reset: Not Supported 00:26:39.664 Command Sets Supported 00:26:39.664 NVM Command Set: Supported 00:26:39.664 Boot Partition: Not Supported 
00:26:39.664 Memory Page Size Minimum: 4096 bytes 00:26:39.664 Memory Page Size Maximum: 4096 bytes 00:26:39.664 Persistent Memory Region: Not Supported 00:26:39.664 Optional Asynchronous Events Supported 00:26:39.664 Namespace Attribute Notices: Not Supported 00:26:39.664 Firmware Activation Notices: Not Supported 00:26:39.664 ANA Change Notices: Not Supported 00:26:39.664 PLE Aggregate Log Change Notices: Not Supported 00:26:39.664 LBA Status Info Alert Notices: Not Supported 00:26:39.664 EGE Aggregate Log Change Notices: Not Supported 00:26:39.664 Normal NVM Subsystem Shutdown event: Not Supported 00:26:39.664 Zone Descriptor Change Notices: Not Supported 00:26:39.664 Discovery Log Change Notices: Supported 00:26:39.664 Controller Attributes 00:26:39.664 128-bit Host Identifier: Not Supported 00:26:39.664 Non-Operational Permissive Mode: Not Supported 00:26:39.664 NVM Sets: Not Supported 00:26:39.664 Read Recovery Levels: Not Supported 00:26:39.664 Endurance Groups: Not Supported 00:26:39.664 Predictable Latency Mode: Not Supported 00:26:39.664 Traffic Based Keep ALive: Not Supported 00:26:39.664 Namespace Granularity: Not Supported 00:26:39.664 SQ Associations: Not Supported 00:26:39.664 UUID List: Not Supported 00:26:39.664 Multi-Domain Subsystem: Not Supported 00:26:39.664 Fixed Capacity Management: Not Supported 00:26:39.664 Variable Capacity Management: Not Supported 00:26:39.664 Delete Endurance Group: Not Supported 00:26:39.664 Delete NVM Set: Not Supported 00:26:39.664 Extended LBA Formats Supported: Not Supported 00:26:39.664 Flexible Data Placement Supported: Not Supported 00:26:39.664 00:26:39.664 Controller Memory Buffer Support 00:26:39.664 ================================ 00:26:39.664 Supported: No 00:26:39.664 00:26:39.664 Persistent Memory Region Support 00:26:39.664 ================================ 00:26:39.664 Supported: No 00:26:39.664 00:26:39.664 Admin Command Set Attributes 00:26:39.664 ============================ 00:26:39.664 Security 
Send/Receive: Not Supported 00:26:39.664 Format NVM: Not Supported 00:26:39.664 Firmware Activate/Download: Not Supported 00:26:39.664 Namespace Management: Not Supported 00:26:39.664 Device Self-Test: Not Supported 00:26:39.664 Directives: Not Supported 00:26:39.664 NVMe-MI: Not Supported 00:26:39.664 Virtualization Management: Not Supported 00:26:39.664 Doorbell Buffer Config: Not Supported 00:26:39.664 Get LBA Status Capability: Not Supported 00:26:39.664 Command & Feature Lockdown Capability: Not Supported 00:26:39.664 Abort Command Limit: 1 00:26:39.664 Async Event Request Limit: 1 00:26:39.664 Number of Firmware Slots: N/A 00:26:39.664 Firmware Slot 1 Read-Only: N/A 00:26:39.664 Firmware Activation Without Reset: N/A 00:26:39.664 Multiple Update Detection Support: N/A 00:26:39.664 Firmware Update Granularity: No Information Provided 00:26:39.664 Per-Namespace SMART Log: No 00:26:39.664 Asymmetric Namespace Access Log Page: Not Supported 00:26:39.664 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:39.664 Command Effects Log Page: Not Supported 00:26:39.664 Get Log Page Extended Data: Supported 00:26:39.664 Telemetry Log Pages: Not Supported 00:26:39.664 Persistent Event Log Pages: Not Supported 00:26:39.664 Supported Log Pages Log Page: May Support 00:26:39.664 Commands Supported & Effects Log Page: Not Supported 00:26:39.664 Feature Identifiers & Effects Log Page:May Support 00:26:39.664 NVMe-MI Commands & Effects Log Page: May Support 00:26:39.664 Data Area 4 for Telemetry Log: Not Supported 00:26:39.664 Error Log Page Entries Supported: 1 00:26:39.664 Keep Alive: Not Supported 00:26:39.664 00:26:39.664 NVM Command Set Attributes 00:26:39.664 ========================== 00:26:39.664 Submission Queue Entry Size 00:26:39.664 Max: 1 00:26:39.664 Min: 1 00:26:39.664 Completion Queue Entry Size 00:26:39.664 Max: 1 00:26:39.664 Min: 1 00:26:39.664 Number of Namespaces: 0 00:26:39.664 Compare Command: Not Supported 00:26:39.664 Write Uncorrectable Command: 
Not Supported 00:26:39.664 Dataset Management Command: Not Supported 00:26:39.664 Write Zeroes Command: Not Supported 00:26:39.664 Set Features Save Field: Not Supported 00:26:39.664 Reservations: Not Supported 00:26:39.664 Timestamp: Not Supported 00:26:39.664 Copy: Not Supported 00:26:39.664 Volatile Write Cache: Not Present 00:26:39.664 Atomic Write Unit (Normal): 1 00:26:39.664 Atomic Write Unit (PFail): 1 00:26:39.664 Atomic Compare & Write Unit: 1 00:26:39.664 Fused Compare & Write: Not Supported 00:26:39.664 Scatter-Gather List 00:26:39.664 SGL Command Set: Supported 00:26:39.664 SGL Keyed: Not Supported 00:26:39.664 SGL Bit Bucket Descriptor: Not Supported 00:26:39.664 SGL Metadata Pointer: Not Supported 00:26:39.664 Oversized SGL: Not Supported 00:26:39.664 SGL Metadata Address: Not Supported 00:26:39.664 SGL Offset: Supported 00:26:39.664 Transport SGL Data Block: Not Supported 00:26:39.664 Replay Protected Memory Block: Not Supported 00:26:39.664 00:26:39.664 Firmware Slot Information 00:26:39.664 ========================= 00:26:39.664 Active slot: 0 00:26:39.664 00:26:39.664 00:26:39.664 Error Log 00:26:39.664 ========= 00:26:39.664 00:26:39.664 Active Namespaces 00:26:39.664 ================= 00:26:39.664 Discovery Log Page 00:26:39.664 ================== 00:26:39.664 Generation Counter: 2 00:26:39.664 Number of Records: 2 00:26:39.664 Record Format: 0 00:26:39.664 00:26:39.664 Discovery Log Entry 0 00:26:39.664 ---------------------- 00:26:39.664 Transport Type: 3 (TCP) 00:26:39.664 Address Family: 1 (IPv4) 00:26:39.664 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:39.664 Entry Flags: 00:26:39.664 Duplicate Returned Information: 0 00:26:39.664 Explicit Persistent Connection Support for Discovery: 0 00:26:39.664 Transport Requirements: 00:26:39.664 Secure Channel: Not Specified 00:26:39.664 Port ID: 1 (0x0001) 00:26:39.664 Controller ID: 65535 (0xffff) 00:26:39.664 Admin Max SQ Size: 32 00:26:39.664 Transport Service Identifier: 4420 
00:26:39.664 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:39.664 Transport Address: 10.0.0.1 00:26:39.664 Discovery Log Entry 1 00:26:39.664 ---------------------- 00:26:39.664 Transport Type: 3 (TCP) 00:26:39.664 Address Family: 1 (IPv4) 00:26:39.664 Subsystem Type: 2 (NVM Subsystem) 00:26:39.664 Entry Flags: 00:26:39.664 Duplicate Returned Information: 0 00:26:39.664 Explicit Persistent Connection Support for Discovery: 0 00:26:39.664 Transport Requirements: 00:26:39.664 Secure Channel: Not Specified 00:26:39.664 Port ID: 1 (0x0001) 00:26:39.664 Controller ID: 65535 (0xffff) 00:26:39.664 Admin Max SQ Size: 32 00:26:39.664 Transport Service Identifier: 4420 00:26:39.665 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:39.665 Transport Address: 10.0.0.1 00:26:39.665 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:39.665 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.665 get_feature(0x01) failed 00:26:39.665 get_feature(0x02) failed 00:26:39.665 get_feature(0x04) failed 00:26:39.665 ===================================================== 00:26:39.665 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:39.665 ===================================================== 00:26:39.665 Controller Capabilities/Features 00:26:39.665 ================================ 00:26:39.665 Vendor ID: 0000 00:26:39.665 Subsystem Vendor ID: 0000 00:26:39.665 Serial Number: b653e065cd4e376fe0a9 00:26:39.665 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:39.665 Firmware Version: 6.7.0-68 00:26:39.665 Recommended Arb Burst: 6 00:26:39.665 IEEE OUI Identifier: 00 00 00 00:26:39.665 Multi-path I/O 00:26:39.665 May have multiple subsystem ports: Yes 00:26:39.665 May have multiple 
controllers: Yes 00:26:39.665 Associated with SR-IOV VF: No 00:26:39.665 Max Data Transfer Size: Unlimited 00:26:39.665 Max Number of Namespaces: 1024 00:26:39.665 Max Number of I/O Queues: 128 00:26:39.665 NVMe Specification Version (VS): 1.3 00:26:39.665 NVMe Specification Version (Identify): 1.3 00:26:39.665 Maximum Queue Entries: 1024 00:26:39.665 Contiguous Queues Required: No 00:26:39.665 Arbitration Mechanisms Supported 00:26:39.665 Weighted Round Robin: Not Supported 00:26:39.665 Vendor Specific: Not Supported 00:26:39.665 Reset Timeout: 7500 ms 00:26:39.665 Doorbell Stride: 4 bytes 00:26:39.665 NVM Subsystem Reset: Not Supported 00:26:39.665 Command Sets Supported 00:26:39.665 NVM Command Set: Supported 00:26:39.665 Boot Partition: Not Supported 00:26:39.665 Memory Page Size Minimum: 4096 bytes 00:26:39.665 Memory Page Size Maximum: 4096 bytes 00:26:39.665 Persistent Memory Region: Not Supported 00:26:39.665 Optional Asynchronous Events Supported 00:26:39.665 Namespace Attribute Notices: Supported 00:26:39.665 Firmware Activation Notices: Not Supported 00:26:39.665 ANA Change Notices: Supported 00:26:39.665 PLE Aggregate Log Change Notices: Not Supported 00:26:39.665 LBA Status Info Alert Notices: Not Supported 00:26:39.665 EGE Aggregate Log Change Notices: Not Supported 00:26:39.665 Normal NVM Subsystem Shutdown event: Not Supported 00:26:39.665 Zone Descriptor Change Notices: Not Supported 00:26:39.665 Discovery Log Change Notices: Not Supported 00:26:39.665 Controller Attributes 00:26:39.665 128-bit Host Identifier: Supported 00:26:39.665 Non-Operational Permissive Mode: Not Supported 00:26:39.665 NVM Sets: Not Supported 00:26:39.665 Read Recovery Levels: Not Supported 00:26:39.665 Endurance Groups: Not Supported 00:26:39.665 Predictable Latency Mode: Not Supported 00:26:39.665 Traffic Based Keep ALive: Supported 00:26:39.665 Namespace Granularity: Not Supported 00:26:39.665 SQ Associations: Not Supported 00:26:39.665 UUID List: Not Supported 
00:26:39.665 Multi-Domain Subsystem: Not Supported 00:26:39.665 Fixed Capacity Management: Not Supported 00:26:39.665 Variable Capacity Management: Not Supported 00:26:39.665 Delete Endurance Group: Not Supported 00:26:39.665 Delete NVM Set: Not Supported 00:26:39.665 Extended LBA Formats Supported: Not Supported 00:26:39.665 Flexible Data Placement Supported: Not Supported 00:26:39.665 00:26:39.665 Controller Memory Buffer Support 00:26:39.665 ================================ 00:26:39.665 Supported: No 00:26:39.665 00:26:39.665 Persistent Memory Region Support 00:26:39.665 ================================ 00:26:39.665 Supported: No 00:26:39.665 00:26:39.665 Admin Command Set Attributes 00:26:39.665 ============================ 00:26:39.665 Security Send/Receive: Not Supported 00:26:39.665 Format NVM: Not Supported 00:26:39.665 Firmware Activate/Download: Not Supported 00:26:39.665 Namespace Management: Not Supported 00:26:39.665 Device Self-Test: Not Supported 00:26:39.665 Directives: Not Supported 00:26:39.665 NVMe-MI: Not Supported 00:26:39.665 Virtualization Management: Not Supported 00:26:39.665 Doorbell Buffer Config: Not Supported 00:26:39.665 Get LBA Status Capability: Not Supported 00:26:39.665 Command & Feature Lockdown Capability: Not Supported 00:26:39.665 Abort Command Limit: 4 00:26:39.665 Async Event Request Limit: 4 00:26:39.665 Number of Firmware Slots: N/A 00:26:39.665 Firmware Slot 1 Read-Only: N/A 00:26:39.665 Firmware Activation Without Reset: N/A 00:26:39.665 Multiple Update Detection Support: N/A 00:26:39.665 Firmware Update Granularity: No Information Provided 00:26:39.665 Per-Namespace SMART Log: Yes 00:26:39.665 Asymmetric Namespace Access Log Page: Supported 00:26:39.665 ANA Transition Time : 10 sec 00:26:39.665 00:26:39.665 Asymmetric Namespace Access Capabilities 00:26:39.665 ANA Optimized State : Supported 00:26:39.665 ANA Non-Optimized State : Supported 00:26:39.665 ANA Inaccessible State : Supported 00:26:39.665 ANA Persistent Loss 
State : Supported 00:26:39.665 ANA Change State : Supported 00:26:39.665 ANAGRPID is not changed : No 00:26:39.665 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:39.665 00:26:39.665 ANA Group Identifier Maximum : 128 00:26:39.665 Number of ANA Group Identifiers : 128 00:26:39.665 Max Number of Allowed Namespaces : 1024 00:26:39.665 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:39.665 Command Effects Log Page: Supported 00:26:39.665 Get Log Page Extended Data: Supported 00:26:39.665 Telemetry Log Pages: Not Supported 00:26:39.665 Persistent Event Log Pages: Not Supported 00:26:39.665 Supported Log Pages Log Page: May Support 00:26:39.665 Commands Supported & Effects Log Page: Not Supported 00:26:39.665 Feature Identifiers & Effects Log Page:May Support 00:26:39.665 NVMe-MI Commands & Effects Log Page: May Support 00:26:39.665 Data Area 4 for Telemetry Log: Not Supported 00:26:39.665 Error Log Page Entries Supported: 128 00:26:39.665 Keep Alive: Supported 00:26:39.665 Keep Alive Granularity: 1000 ms 00:26:39.665 00:26:39.665 NVM Command Set Attributes 00:26:39.665 ========================== 00:26:39.665 Submission Queue Entry Size 00:26:39.665 Max: 64 00:26:39.665 Min: 64 00:26:39.665 Completion Queue Entry Size 00:26:39.665 Max: 16 00:26:39.665 Min: 16 00:26:39.665 Number of Namespaces: 1024 00:26:39.665 Compare Command: Not Supported 00:26:39.665 Write Uncorrectable Command: Not Supported 00:26:39.665 Dataset Management Command: Supported 00:26:39.665 Write Zeroes Command: Supported 00:26:39.665 Set Features Save Field: Not Supported 00:26:39.665 Reservations: Not Supported 00:26:39.665 Timestamp: Not Supported 00:26:39.665 Copy: Not Supported 00:26:39.665 Volatile Write Cache: Present 00:26:39.665 Atomic Write Unit (Normal): 1 00:26:39.665 Atomic Write Unit (PFail): 1 00:26:39.665 Atomic Compare & Write Unit: 1 00:26:39.665 Fused Compare & Write: Not Supported 00:26:39.665 Scatter-Gather List 00:26:39.665 SGL Command Set: Supported 00:26:39.665 SGL 
Keyed: Not Supported 00:26:39.665 SGL Bit Bucket Descriptor: Not Supported 00:26:39.665 SGL Metadata Pointer: Not Supported 00:26:39.665 Oversized SGL: Not Supported 00:26:39.665 SGL Metadata Address: Not Supported 00:26:39.665 SGL Offset: Supported 00:26:39.665 Transport SGL Data Block: Not Supported 00:26:39.665 Replay Protected Memory Block: Not Supported 00:26:39.665 00:26:39.665 Firmware Slot Information 00:26:39.665 ========================= 00:26:39.665 Active slot: 0 00:26:39.665 00:26:39.665 Asymmetric Namespace Access 00:26:39.665 =========================== 00:26:39.665 Change Count : 0 00:26:39.665 Number of ANA Group Descriptors : 1 00:26:39.666 ANA Group Descriptor : 0 00:26:39.666 ANA Group ID : 1 00:26:39.666 Number of NSID Values : 1 00:26:39.666 Change Count : 0 00:26:39.666 ANA State : 1 00:26:39.666 Namespace Identifier : 1 00:26:39.666 00:26:39.666 Commands Supported and Effects 00:26:39.666 ============================== 00:26:39.666 Admin Commands 00:26:39.666 -------------- 00:26:39.666 Get Log Page (02h): Supported 00:26:39.666 Identify (06h): Supported 00:26:39.666 Abort (08h): Supported 00:26:39.666 Set Features (09h): Supported 00:26:39.666 Get Features (0Ah): Supported 00:26:39.666 Asynchronous Event Request (0Ch): Supported 00:26:39.666 Keep Alive (18h): Supported 00:26:39.666 I/O Commands 00:26:39.666 ------------ 00:26:39.666 Flush (00h): Supported 00:26:39.666 Write (01h): Supported LBA-Change 00:26:39.666 Read (02h): Supported 00:26:39.666 Write Zeroes (08h): Supported LBA-Change 00:26:39.666 Dataset Management (09h): Supported 00:26:39.666 00:26:39.666 Error Log 00:26:39.666 ========= 00:26:39.666 Entry: 0 00:26:39.666 Error Count: 0x3 00:26:39.666 Submission Queue Id: 0x0 00:26:39.666 Command Id: 0x5 00:26:39.666 Phase Bit: 0 00:26:39.666 Status Code: 0x2 00:26:39.666 Status Code Type: 0x0 00:26:39.666 Do Not Retry: 1 00:26:39.666 Error Location: 0x28 00:26:39.666 LBA: 0x0 00:26:39.666 Namespace: 0x0 00:26:39.666 Vendor Log Page: 
0x0 00:26:39.666 ----------- 00:26:39.666 Entry: 1 00:26:39.666 Error Count: 0x2 00:26:39.666 Submission Queue Id: 0x0 00:26:39.666 Command Id: 0x5 00:26:39.666 Phase Bit: 0 00:26:39.666 Status Code: 0x2 00:26:39.666 Status Code Type: 0x0 00:26:39.666 Do Not Retry: 1 00:26:39.666 Error Location: 0x28 00:26:39.666 LBA: 0x0 00:26:39.666 Namespace: 0x0 00:26:39.666 Vendor Log Page: 0x0 00:26:39.666 ----------- 00:26:39.666 Entry: 2 00:26:39.666 Error Count: 0x1 00:26:39.666 Submission Queue Id: 0x0 00:26:39.666 Command Id: 0x4 00:26:39.666 Phase Bit: 0 00:26:39.666 Status Code: 0x2 00:26:39.666 Status Code Type: 0x0 00:26:39.666 Do Not Retry: 1 00:26:39.666 Error Location: 0x28 00:26:39.666 LBA: 0x0 00:26:39.666 Namespace: 0x0 00:26:39.666 Vendor Log Page: 0x0 00:26:39.666 00:26:39.666 Number of Queues 00:26:39.666 ================ 00:26:39.666 Number of I/O Submission Queues: 128 00:26:39.666 Number of I/O Completion Queues: 128 00:26:39.666 00:26:39.666 ZNS Specific Controller Data 00:26:39.666 ============================ 00:26:39.666 Zone Append Size Limit: 0 00:26:39.666 00:26:39.666 00:26:39.666 Active Namespaces 00:26:39.666 ================= 00:26:39.666 get_feature(0x05) failed 00:26:39.666 Namespace ID:1 00:26:39.666 Command Set Identifier: NVM (00h) 00:26:39.666 Deallocate: Supported 00:26:39.666 Deallocated/Unwritten Error: Not Supported 00:26:39.666 Deallocated Read Value: Unknown 00:26:39.666 Deallocate in Write Zeroes: Not Supported 00:26:39.666 Deallocated Guard Field: 0xFFFF 00:26:39.666 Flush: Supported 00:26:39.666 Reservation: Not Supported 00:26:39.666 Namespace Sharing Capabilities: Multiple Controllers 00:26:39.666 Size (in LBAs): 3750748848 (1788GiB) 00:26:39.666 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:39.666 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:39.666 UUID: 8bb7b815-433e-4dc5-926e-9ebac4dbc077 00:26:39.666 Thin Provisioning: Not Supported 00:26:39.666 Per-NS Atomic Units: Yes 00:26:39.666 Atomic Write Unit (Normal): 8 
00:26:39.666 Atomic Write Unit (PFail): 8 00:26:39.666 Preferred Write Granularity: 8 00:26:39.666 Atomic Compare & Write Unit: 8 00:26:39.666 Atomic Boundary Size (Normal): 0 00:26:39.666 Atomic Boundary Size (PFail): 0 00:26:39.666 Atomic Boundary Offset: 0 00:26:39.666 NGUID/EUI64 Never Reused: No 00:26:39.666 ANA group ID: 1 00:26:39.666 Namespace Write Protected: No 00:26:39.666 Number of LBA Formats: 1 00:26:39.666 Current LBA Format: LBA Format #00 00:26:39.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:39.666 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:39.666 rmmod nvme_tcp 00:26:39.666 rmmod nvme_fabrics 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.666 20:06:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:42.213 
20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:42.213 20:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:45.520 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:45.520 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:45.781 00:26:45.781 real 0m18.705s 00:26:45.781 user 0m5.026s 00:26:45.781 sys 0m10.622s 00:26:45.781 20:06:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.781 20:06:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:45.781 ************************************ 00:26:45.781 END TEST nvmf_identify_kernel_target 00:26:45.781 ************************************ 00:26:45.781 20:06:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:45.781 20:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:45.781 20:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.781 20:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.781 ************************************ 00:26:45.781 START TEST nvmf_auth_host 00:26:45.781 ************************************ 00:26:45.781 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:46.042 * Looking for test storage... 00:26:46.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.042 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:46.043 20:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:52.630 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:52.630 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.630 20:06:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:52.630 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:52.630 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:52.630 
20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.630 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:26:52.892 00:26:52.892 --- 10.0.0.2 ping statistics --- 00:26:52.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.892 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:52.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:26:52.892 00:26:52.892 --- 10.0.0.1 ping statistics --- 00:26:52.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.892 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3826788 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3826788 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3826788 ']' 00:26:52.892 20:06:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:52.892 20:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fbd9a61b5eda1fb352569ad7079c6b29 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.I4i 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fbd9a61b5eda1fb352569ad7079c6b29 0 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fbd9a61b5eda1fb352569ad7079c6b29 0 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fbd9a61b5eda1fb352569ad7079c6b29 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.I4i 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.I4i 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.I4i 00:26:53.836 20:06:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=743bfb2fc0f97e0705834445c7ca7832eba31aad2fb386d72f7ca80fa7c33dff 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.iky 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 743bfb2fc0f97e0705834445c7ca7832eba31aad2fb386d72f7ca80fa7c33dff 3 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 743bfb2fc0f97e0705834445c7ca7832eba31aad2fb386d72f7ca80fa7c33dff 3 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=743bfb2fc0f97e0705834445c7ca7832eba31aad2fb386d72f7ca80fa7c33dff 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 
00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.iky 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.iky 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.iky 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=68f6838c9ff8c9f2ba13bca82844d6e0bfa1c0c36e4fe1f8 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Wgl 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 68f6838c9ff8c9f2ba13bca82844d6e0bfa1c0c36e4fe1f8 0 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 68f6838c9ff8c9f2ba13bca82844d6e0bfa1c0c36e4fe1f8 0 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:53.836 20:06:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=68f6838c9ff8c9f2ba13bca82844d6e0bfa1c0c36e4fe1f8 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:53.836 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Wgl 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Wgl 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Wgl 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f54741187106d52a2ed6c696836a8e56b86c274ae701239d 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zhm 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f54741187106d52a2ed6c696836a8e56b86c274ae701239d 2 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # 
format_key DHHC-1 f54741187106d52a2ed6c696836a8e56b86c274ae701239d 2 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f54741187106d52a2ed6c696836a8e56b86c274ae701239d 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zhm 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zhm 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zhm 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=805332fbb75e3f22f3625d21c82f9472 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.x7r 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 805332fbb75e3f22f3625d21c82f9472 1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 805332fbb75e3f22f3625d21c82f9472 1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=805332fbb75e3f22f3625d21c82f9472 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.x7r 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.x7r 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.x7r 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@727 -- # key=32e8ad8f0546a1b25817854f8d557f53 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4KP 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 32e8ad8f0546a1b25817854f8d557f53 1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 32e8ad8f0546a1b25817854f8d557f53 1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=32e8ad8f0546a1b25817854f8d557f53 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4KP 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4KP 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4KP 00:26:54.098 20:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:54.098 20:06:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe84c77d153c052950081f13f0f15d30c64c7d48ab655b4e 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9QH 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe84c77d153c052950081f13f0f15d30c64c7d48ab655b4e 2 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe84c77d153c052950081f13f0f15d30c64c7d48ab655b4e 2 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fe84c77d153c052950081f13f0f15d30c64c7d48ab655b4e 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:54.098 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9QH 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9QH 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9QH 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2004440219176c8f078edc176d57d0a5 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Qm4 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2004440219176c8f078edc176d57d0a5 0 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2004440219176c8f078edc176d57d0a5 0 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2004440219176c8f078edc176d57d0a5 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Qm4 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Qm4 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Qm4 00:26:54.360 20:06:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5b311881a51a7d37a21039e829dccb5929b03581bbd043cd9af219b1e0afaf3a 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lTT 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5b311881a51a7d37a21039e829dccb5929b03581bbd043cd9af219b1e0afaf3a 3 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5b311881a51a7d37a21039e829dccb5929b03581bbd043cd9af219b1e0afaf3a 3 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5b311881a51a7d37a21039e829dccb5929b03581bbd043cd9af219b1e0afaf3a 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 
00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lTT 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lTT 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lTT 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3826788 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3826788 ']' 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
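The gen_dhchap_key traces above read random bytes with `xxd` and pipe them through `python -` to produce strings like `/tmp/spdk.key-sha256.x7r` containing `DHHC-1:..:..:` secrets. A minimal sketch of what that formatting step computes, assuming the NVMe-spec secret representation (base64 of the key bytes followed by a little-endian CRC-32 of those bytes); the function name mirrors the script's `format_dhchap_key` but this is an illustrative reconstruction, not the script's exact code:

```python
import base64
import struct
import zlib

def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Render a DH-HMAC-CHAP secret as DHHC-1:<digest>:<base64(key || crc32)>:.

    Assumption: the trailing 4 bytes inside the base64 payload are the
    little-endian CRC-32 of the key material, per the NVMe secret layout.
    """
    key = bytes.fromhex(key_hex)
    crc = struct.pack("<I", zlib.crc32(key))  # CRC-32 appended little-endian
    payload = base64.b64encode(key + crc).decode()
    return "DHHC-1:%02x:%s:" % (digest, payload)

# Same 32-hex-digit key and digest id 1 (sha256) as in the trace above.
secret = format_dhchap_key("805332fbb75e3f22f3625d21c82f9472", 1)
```

The CRC suffix lets a consumer (e.g. the kernel or SPDK target) detect a corrupted or truncated secret before attempting authentication with it.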
00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.360 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.I4i 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.iky ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iky 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Wgl 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zhm ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zhm 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.x7r 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4KP ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4KP 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.9QH 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Qm4 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Qm4 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lTT 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.621 20:06:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.621 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.622 20:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:57.929 Waiting for block devices as requested 00:26:57.929 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:57.929 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:57.929 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:57.929 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:58.190 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:58.190 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:58.190 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:58.450 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:58.450 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:58.711 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:58.711 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:58.711 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:58.971 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:58.971 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:58.971 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:59.232 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:59.232 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:00.296 No valid GPT data, bailing 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 
-- # echo 10.0.0.1 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:00.296 20:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:00.296 00:27:00.296 Discovery Log Number of Records 2, Generation counter 2 00:27:00.296 =====Discovery Log Entry 0====== 00:27:00.296 trtype: tcp 00:27:00.296 adrfam: ipv4 00:27:00.296 subtype: current discovery subsystem 00:27:00.296 treq: not specified, sq flow control disable supported 00:27:00.296 portid: 1 00:27:00.296 trsvcid: 4420 00:27:00.296 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:00.296 traddr: 10.0.0.1 00:27:00.296 eflags: none 00:27:00.296 sectype: none 00:27:00.296 =====Discovery Log Entry 1====== 00:27:00.296 trtype: tcp 00:27:00.296 adrfam: ipv4 00:27:00.296 subtype: nvme subsystem 00:27:00.296 treq: not specified, sq flow control disable supported 00:27:00.296 portid: 1 00:27:00.296 trsvcid: 4420 00:27:00.296 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:00.296 traddr: 10.0.0.1 00:27:00.296 eflags: none 00:27:00.296 sectype: none 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.296 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.558 nvme0n1 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.558 nvme0n1 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.558 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.820 20:06:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.820 
20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.820 nvme0n1 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.820 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:01.082 nvme0n1 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.082 20:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:01.082 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.083 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.344 nvme0n1 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.344 20:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.344 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 nvme0n1 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.605 
20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.605 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:01.606 
20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.606 20:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.606 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.867 nvme0n1 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.867 20:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.867 20:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.867 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.868 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.129 nvme0n1 00:27:02.129 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.129 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.129 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.129 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.129 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.129 20:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.129 20:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.129 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.391 nvme0n1 00:27:02.391 20:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:02.391 20:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.391 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.392 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.392 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.392 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.653 nvme0n1 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.653 20:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.653 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.654 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.654 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.654 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.654 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.654 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.915 nvme0n1 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.915 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.176 20:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.438 nvme0n1 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.438 
20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.438 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.700 nvme0n1 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.700 20:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.700 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.961 nvme0n1 00:27:03.961 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.961 20:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.961 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.961 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.961 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.961 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:04.222 
20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.222 20:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.222 20:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.483 nvme0n1 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.483 20:06:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.483 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.484 
20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.484 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.745 nvme0n1 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.745 20:06:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.745 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.746 20:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.318 nvme0n1 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.318 20:06:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.318 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.894 nvme0n1 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.894 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.895 20:06:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.895 20:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.467 nvme0n1 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.467 20:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.467 20:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.467 20:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.467 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.039 nvme0n1 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.039 20:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.039 20:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.039 20:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.612 nvme0n1 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.612 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.613 20:06:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.613 20:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.565 nvme0n1 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.565 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.566 20:06:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.566 20:06:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.566 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.566 20:06:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.138 nvme0n1 00:27:09.138 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.138 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.138 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.138 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.138 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.138 20:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.138 20:06:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.138 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.081 nvme0n1 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.081 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.082 20:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.025 nvme0n1 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.025 
20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.025 20:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.596 nvme0n1 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.596 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.857 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.858 nvme0n1 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.858 
20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.858 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.119 nvme0n1 
00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.119 20:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:12.119 20:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.119 
20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.119 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.120 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.120 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.120 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.120 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.380 nvme0n1 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.380 20:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:12.380 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.381 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.642 nvme0n1 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.642 20:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.642 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.643 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.643 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.643 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.904 nvme0n1 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.904 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.166 nvme0n1 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.166 20:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.166 
20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.166 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.428 nvme0n1 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 
00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.428 20:07:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.428 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.689 nvme0n1 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.689 20:07:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.689 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.690 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.690 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.690 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.690 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.690 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.951 nvme0n1 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.951 20:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.212 nvme0n1 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.212 20:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.212 20:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.212 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.213 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.213 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.213 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.213 20:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.473 nvme0n1 00:27:14.473 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.473 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.473 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.473 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.473 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.473 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:14.734 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.735 
20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.735 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.996 nvme0n1 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.996 20:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.996 20:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.996 20:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.258 nvme0n1 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.258 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.258 20:07:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.259 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.259 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.259 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.259 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.259 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.259 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.520 nvme0n1 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.781 20:07:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.781 20:07:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.781 
20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.781 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.043 nvme0n1 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.043 20:07:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.043 20:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 nvme0n1 
00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:16.615 20:07:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:16.615 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.616 
20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.616 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 nvme0n1 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.188 20:07:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.188 20:07:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.188 20:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.761 nvme0n1 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:17.761 20:07:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.761 20:07:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.761 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.022 nvme0n1 00:27:18.022 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.283 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.283 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.283 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.283 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.283 20:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.283 20:07:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:18.283 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.857 nvme0n1 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.857 20:07:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.857 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.858 20:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.432 nvme0n1 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.432 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.433 20:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.406 nvme0n1 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.406 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.407 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.350 nvme0n1 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:21.350 20:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.350 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.953 nvme0n1 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.953 20:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.896 nvme0n1 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:22.896 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
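The records above repeat one pattern per (digest, dhgroup, keyid) tuple: `host/auth.sh` pushes the key into the kernel nvmet target (`nvmet_auth_set_key`), reconfigures the SPDK host side (`bdev_nvme_set_options`), attaches with the matching `--dhchap-key` (adding `--dhchap-ctrlr-key` only when a controller key exists, which keyid 4 does not), verifies the controller came up as `nvme0`, then detaches. The following is a minimal, self-contained sketch of that iteration order with `rpc_cmd` and `nvmet_auth_set_key` stubbed to echo (the real `rpc_cmd` talks to SPDK's `scripts/rpc.py`); the digest/dhgroup lists are a subset for illustration, the full run in this log covers more combinations:

```shell
#!/usr/bin/env bash
# Illustrative stubs: in the SPDK test suite these forward to scripts/rpc.py
# and to the nvmet configfs helpers; here they only print what they would do.
rpc_cmd() { echo "rpc_cmd $*"; }
nvmet_auth_set_key() { echo "nvmet_auth_set_key $*"; }

run_auth_matrix() {
  local digests=(sha384 sha512)
  local dhgroups=(ffdhe2048 ffdhe8192)
  local keys=(key0 key1 key2 key3 key4)
  local ckeys=(ckey0 ckey1 ckey2 ckey3 "")   # keyid 4 has no controller key in the log

  local digest dhgroup keyid ckey
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Same trick as host/auth.sh@58: expands to nothing when ckeys[keyid] is empty,
        # so keyid 4 is attached without --dhchap-ctrlr-key.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
        rpc_cmd bdev_nvme_get_controllers
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done
}

run_auth_matrix
```

With 2 digests, 2 dhgroups, and 5 keyids this sketch performs 20 attach/detach cycles, 16 of them bidirectional; the array-expansion idiom on the `ckey=` line is lifted directly from the `host/auth.sh@58` records in the log.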
00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:22.897 20:07:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.897 nvme0n1 00:27:22.897 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.157 20:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.157 nvme0n1 00:27:23.157 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.157 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.157 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.157 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
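The `DHHC-1:NN:...:` strings echoed throughout these records are nvme-cli-style DH-HMAC-CHAP secrets: `NN` encodes the transformation hash (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and, as the format is commonly described for `nvme gen-dhchap-key`, the base64 payload is the raw key followed by a 4-byte CRC-32 tail (that layout is an assumption here; the CRC itself is not verified). A quick structural check on one keyid-0 secret taken from the log:

```shell
#!/usr/bin/env bash
# One of the keyid=0 secrets from the log above.
key='DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g:'

# Split on ':' -> prefix, hash id, base64 payload (trailing field is empty).
IFS=: read -r magic hash b64 _ <<<"$key"
echo "magic=$magic hash=$hash"

# Decoded size; assuming the key+CRC layout, a hash id of 00 with a
# 32-byte secret gives 32 + 4 = 36 payload bytes.
payload_len=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "payload=${payload_len} bytes"
```

This matches what the test exercises: keyids 0-2 carry short unhashed or SHA-256/SHA-384-class secrets, while the keyid 3/4 secrets (the long `DHHC-1:02:`/`DHHC-1:03:` strings) decode to correspondingly larger key material.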
00:27:23.157 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.157 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.417 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.418 nvme0n1 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.418 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 
-- # ip=NVMF_INITIATOR_IP 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.679 nvme0n1 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.679 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.680 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:23.941 nvme0n1 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.941 20:07:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.941 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.942 20:07:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.942 20:07:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.202 nvme0n1 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:24.202 20:07:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:24.202 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.203 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.465 nvme0n1 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.465 
20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.465 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.727 20:07:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.727 nvme0n1 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.727 20:07:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.727 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.988 20:07:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.988 nvme0n1 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:24.988 20:07:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.988 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.249 20:07:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.249 nvme0n1 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.249 
20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.249 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.510 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.510 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.510 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.511 
20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.511 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.771 nvme0n1 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.771 20:07:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.771 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.772 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.033 nvme0n1 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.033 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.034 20:07:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.294 nvme0n1 00:27:26.294 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.294 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.294 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.294 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.294 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.294 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.555 20:07:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.555 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.556 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.817 nvme0n1 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.817 20:07:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.817 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.818 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.818 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.818 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.818 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.818 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.818 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.818 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.078 nvme0n1 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.078 
20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:27.078 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.079 20:07:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.079 20:07:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.651 nvme0n1 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.651 20:07:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.651 
20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.651 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.652 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.652 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.652 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.224 nvme0n1 00:27:28.224 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.224 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.224 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.224 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.224 20:07:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:28.224 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 
00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.225 20:07:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.225 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.796 nvme0n1 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.796 20:07:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.796 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.797 20:07:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 nvme0n1 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.369 20:07:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.369 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.370 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.370 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.370 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.370 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.370 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.370 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.941 nvme0n1 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.941 
20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJkOWE2MWI1ZWRhMWZiMzUyNTY5YWQ3MDc5YzZiMjmmnQ7g: 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: ]] 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzQzYmZiMmZjMGY5N2UwNzA1ODM0NDQ1YzdjYTc4MzJlYmEzMWFhZDJmYjM4NmQ3MmY3Y2E4MGZhN2MzM2RmZgT/G+c=: 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.941 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.942 20:07:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.942 20:07:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.513 nvme0n1 00:27:30.513 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.513 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.513 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.513 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.513 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.513 20:07:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.774 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:30.775 20:07:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.775 20:07:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.775 20:07:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.347 nvme0n1 00:27:31.347 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.347 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.347 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.347 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.347 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.347 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.608 20:07:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODA1MzMyZmJiNzVlM2YyMmYzNjI1ZDIxYzgyZjk0NzJ4ZX58: 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzJlOGFkOGYwNTQ2YTFiMjU4MTc4NTRmOGQ1NTdmNTOK36v8: 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:31.608 20:07:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.608 20:07:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.179 nvme0n1 00:27:32.179 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.179 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.179 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.179 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.179 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.179 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.440 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.440 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.440 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.440 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.440 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.440 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.440 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmU4NGM3N2QxNTNjMDUyOTUwMDgxZjEzZjBmMTVkMzBjNjRjN2Q0OGFiNjU1YjRlZBhBKA==: 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: ]] 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAwNDQ0MDIxOTE3NmM4ZjA3OGVkYzE3NmQ1N2QwYTXmgwZS: 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.441 20:07:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.441 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.011 nvme0n1 00:27:33.011 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.272 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.272 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.272 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.272 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.272 20:07:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.272 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWIzMTE4ODFhNTFhN2QzN2EyMTAzOWU4MjlkY2NiNTkyOWIwMzU4MWJiZDA0M2NkOWFmMjE5YjFlMGFmYWYzYedMW+0=: 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.273 
20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.273 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.215 nvme0n1 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:34.215 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjhmNjgzOGM5ZmY4YzlmMmJhMTNiY2E4Mjg0NGQ2ZTBiZmExYzBjMzZlNGZlMWY4KWhj8g==: 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjU0NzQxMTg3MTA2ZDUyYTJlZDZjNjk2ODM2YThlNTZiODZjMjc0YWU3MDEyMzlk0niw/Q==: 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.216 request: 00:27:34.216 { 00:27:34.216 "name": "nvme0", 00:27:34.216 "trtype": "tcp", 00:27:34.216 "traddr": "10.0.0.1", 00:27:34.216 "adrfam": "ipv4", 00:27:34.216 "trsvcid": "4420", 00:27:34.216 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.216 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.216 "prchk_reftag": false, 00:27:34.216 "prchk_guard": false, 00:27:34.216 "hdgst": false, 00:27:34.216 "ddgst": false, 00:27:34.216 "method": "bdev_nvme_attach_controller", 00:27:34.216 "req_id": 1 00:27:34.216 } 00:27:34.216 Got JSON-RPC error response 00:27:34.216 response: 00:27:34.216 { 
00:27:34.216 "code": -5, 00:27:34.216 "message": "Input/output error" 00:27:34.216 } 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.216 20:07:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.216 20:07:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.216 request: 00:27:34.216 { 00:27:34.216 "name": "nvme0", 
00:27:34.216 "trtype": "tcp", 00:27:34.216 "traddr": "10.0.0.1", 00:27:34.216 "adrfam": "ipv4", 00:27:34.216 "trsvcid": "4420", 00:27:34.216 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.216 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.216 "prchk_reftag": false, 00:27:34.216 "prchk_guard": false, 00:27:34.216 "hdgst": false, 00:27:34.216 "ddgst": false, 00:27:34.216 "dhchap_key": "key2", 00:27:34.216 "method": "bdev_nvme_attach_controller", 00:27:34.216 "req_id": 1 00:27:34.216 } 00:27:34.216 Got JSON-RPC error response 00:27:34.216 response: 00:27:34.216 { 00:27:34.216 "code": -5, 00:27:34.216 "message": "Input/output error" 00:27:34.216 } 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.216 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:34.217 20:07:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.217 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.478 request: 00:27:34.478 { 00:27:34.478 "name": "nvme0", 00:27:34.478 "trtype": "tcp", 00:27:34.478 "traddr": "10.0.0.1", 00:27:34.478 "adrfam": "ipv4", 00:27:34.478 "trsvcid": "4420", 00:27:34.478 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.478 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.478 "prchk_reftag": false, 00:27:34.478 "prchk_guard": false, 00:27:34.478 "hdgst": false, 00:27:34.478 "ddgst": false, 00:27:34.478 "dhchap_key": "key1", 00:27:34.478 "dhchap_ctrlr_key": "ckey2", 00:27:34.478 "method": "bdev_nvme_attach_controller", 00:27:34.478 "req_id": 1 00:27:34.478 } 00:27:34.478 Got JSON-RPC error response 00:27:34.478 response: 00:27:34.478 { 00:27:34.478 "code": -5, 00:27:34.478 "message": "Input/output error" 00:27:34.478 } 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 
00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:34.478 rmmod nvme_tcp 00:27:34.478 rmmod nvme_fabrics 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3826788 ']' 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3826788 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3826788 ']' 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3826788 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3826788 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3826788' 00:27:34.478 killing process with pid 3826788 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3826788 00:27:34.478 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3826788 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.739 20:07:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:36.654 20:07:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:36.654 20:07:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:40.905 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.3 
(8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:40.905 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:40.905 20:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.I4i /tmp/spdk.key-null.Wgl /tmp/spdk.key-sha256.x7r /tmp/spdk.key-sha384.9QH /tmp/spdk.key-sha512.lTT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:40.905 20:07:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:44.239 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:44.239 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:44.239 00:27:44.239 real 0m58.291s 00:27:44.239 user 0m52.485s 00:27:44.239 sys 0m14.700s 00:27:44.239 20:07:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.239 20:07:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.239 ************************************ 00:27:44.239 END TEST nvmf_auth_host 00:27:44.239 ************************************ 00:27:44.239 20:07:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:44.239 20:07:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:44.239 20:07:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:44.239 20:07:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.239 20:07:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.239 ************************************ 00:27:44.239 START TEST nvmf_digest 00:27:44.239 ************************************ 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:44.239 * Looking for test storage... 
00:27:44.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.239 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.240 20:07:32 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:27:44.240 20:07:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:52.379 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.379 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
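The device-discovery step above buckets PCI IDs by vendor:device into the `e810`, `x722`, and `mlx` arrays before enumerating the net devices under each function. A minimal sketch of that classification, with the device-ID lists copied from the log (the helper name `classify_nic` is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Map a PCI vendor/device pair to the NIC family bucket used by nvmf/common.sh.
# ID lists are taken from the e810+=/x722+=/mlx+= lines in the log above.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;      # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;      # Intel X722
        0x15b3:*)                    echo mlx ;;       # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}
```

Both ports found in this run (0000:4b:00.0 and 0000:4b:00.1, `0x8086 - 0x159b`) land in the `e810` bucket, which is why the `[[ e810 == e810 ]]` branch is taken.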
00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:52.380 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.380 20:07:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:52.380 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:52.380 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:27:52.380 00:27:52.380 --- 10.0.0.2 ping statistics --- 00:27:52.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.380 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:27:52.380 00:27:52.380 --- 10.0.0.1 ping statistics --- 00:27:52.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.380 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
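The `nvmf_tcp_init` sequence above builds the test topology: the target-side port is moved into a network namespace, each side gets an address on 10.0.0.0/24, an iptables rule opens port 4420, and two pings verify connectivity. A condensed sketch of those commands (interface names and IPs taken from the log; run with `DRY_RUN=echo` to print the commands instead of executing them, since the real ones need root and the two E810 ports):

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based TCP test topology built in the log above.
setup_test_netns() {
    local run=${DRY_RUN:-}
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1

    $run ip netns add "$ns"                    # target side lives in its own netns
    $run ip link set "$tgt_if" netns "$ns"     # move the target port into it
    $run ip addr add 10.0.0.1/24 dev "$ini_if" # initiator IP, default netns
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $run ip link set "$ini_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    # sanity checks, mirroring the two pings in the log
    $run ping -c 1 10.0.0.2
    $run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Putting the target in a namespace lets initiator and target share one host while still exercising a real TCP path over the physical NICs.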
00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:52.380 ************************************ 00:27:52.380 START TEST nvmf_digest_clean 00:27:52.380 ************************************ 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3843511 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3843511 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3843511 ']' 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.380 20:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.381 [2024-07-24 20:07:39.596876] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:27:52.381 [2024-07-24 20:07:39.596927] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.381 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.381 [2024-07-24 20:07:39.663255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.381 [2024-07-24 20:07:39.730640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.381 [2024-07-24 20:07:39.730676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.381 [2024-07-24 20:07:39.730684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.381 [2024-07-24 20:07:39.730690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.381 [2024-07-24 20:07:39.730696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:52.381 [2024-07-24 20:07:39.730714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 null0 00:27:52.641 [2024-07-24 20:07:40.469088] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.641 [2024-07-24 20:07:40.493300] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
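Between the `nvmf_tgt` start (inside the namespace, with `--wait-for-rpc`) and the listener notice on 10.0.0.2:4420, the log implies a target-side RPC configuration pass (the `null0` bdev and the TCP transport init are visible in the output). A hedged reconstruction of that sequence using standard `rpc.py` commands; the exact ordering and the null-bdev size/block-size are assumptions, and `DRY_RUN=echo` prints instead of executing:

```shell
#!/usr/bin/env bash
# Hedged sketch of the target configuration implied by the log: TCP transport,
# a null bdev, and a subsystem listening on 10.0.0.2:4420.
configure_digest_target() {
    local run=${DRY_RUN:-}
    local rpc="scripts/rpc.py"             # path assumed relative to an SPDK checkout
    local nqn=nqn.2016-06.io.spdk:cnode1   # matches host/digest.sh@14 in the log

    $run "$rpc" framework_start_init       # target was started with --wait-for-rpc
    $run "$rpc" nvmf_create_transport -t tcp -o   # '-t tcp -o' from NVMF_TRANSPORT_OPTS
    $run "$rpc" bdev_null_create null0 100 512    # size/block-size are assumed here
    $run "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    $run "$rpc" nvmf_subsystem_add_ns "$nqn" null0
    $run "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
}
```

A null bdev is a natural choice here: the digest test measures crc32c handling on the TCP path, so backing storage that discards writes and returns zeroes keeps disk I/O out of the measurement.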
00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3843562 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3843562 /var/tmp/bperf.sock 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3843562 ']' 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:52.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
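On the initiator side, the log launches `bdevperf` idle (`-z --wait-for-rpc`) on a private RPC socket, then configures it over that socket; `--ddgst` on the attach enables the NVMe/TCP data digest under test. A condensed sketch of that sequence with the flags copied from the log (binary/script paths are assumptions relative to an SPDK checkout; `DRY_RUN=echo` prints the commands):

```shell
#!/usr/bin/env bash
# Initiator-side sequence from the log: idle bdevperf, RPC-driven attach with
# data digest enabled, then the timed workload via bdevperf.py.
run_digest_bperf() {
    local run=${DRY_RUN:-}
    local sock=/var/tmp/bperf.sock

    $run build/examples/bdevperf -m 2 -r "$sock" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $run scripts/rpc.py -s "$sock" framework_start_init
    $run scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $run examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
    wait    # let the backgrounded bdevperf finish
}
```

Starting bdevperf with `--wait-for-rpc` is what allows the test to attach the controller with digest options before any I/O subsystem initialization happens.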
00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.641 20:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.641 [2024-07-24 20:07:40.548379] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:27:52.641 [2024-07-24 20:07:40.548427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3843562 ] 00:27:52.641 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.901 [2024-07-24 20:07:40.623451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.901 [2024-07-24 20:07:40.687843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.471 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.471 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:53.471 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:53.471 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:53.471 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:53.732 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.732 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.992 nvme0n1 00:27:53.992 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:53.992 20:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:54.253 Running I/O for 2 seconds... 00:27:56.166 00:27:56.166 Latency(us) 00:27:56.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.166 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:56.166 nvme0n1 : 2.00 20895.61 81.62 0.00 0.00 6117.50 3044.69 16165.55 00:27:56.166 =================================================================================================================== 00:27:56.166 Total : 20895.61 81.62 0.00 0.00 6117.50 3044.69 16165.55 00:27:56.166 0 00:27:56.166 20:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:56.166 20:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:56.166 20:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:56.166 20:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:56.166 20:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:56.166 | select(.opcode=="crc32c") 00:27:56.166 | "\(.module_name) \(.executed)"' 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3843562 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3843562 ']' 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3843562 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3843562 00:27:56.427 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3843562' 00:27:56.428 killing process with pid 3843562 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3843562 00:27:56.428 Received shutdown signal, test time was about 2.000000 seconds 00:27:56.428 00:27:56.428 Latency(us) 00:27:56.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.428 =================================================================================================================== 00:27:56.428 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3843562 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3844379 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3844379 /var/tmp/bperf.sock 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3844379 ']' 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:56.428 20:07:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.428 [2024-07-24 20:07:44.369221] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:27:56.428 [2024-07-24 20:07:44.369283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844379 ] 00:27:56.428 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.428 Zero copy mechanism will not be used. 00:27:56.689 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.689 [2024-07-24 20:07:44.444721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.689 [2024-07-24 20:07:44.497873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.260 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:57.260 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:57.260 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:57.260 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:57.260 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:57.520 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.520 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.780 nvme0n1 00:27:57.780 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:57.780 20:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:57.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:57.780 Zero copy mechanism will not be used. 00:27:57.780 Running I/O for 2 seconds... 00:28:00.326 00:28:00.326 Latency(us) 00:28:00.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.326 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:00.326 nvme0n1 : 2.00 1995.44 249.43 0.00 0.00 8014.50 1672.53 12451.84 00:28:00.326 =================================================================================================================== 00:28:00.326 Total : 1995.44 249.43 0.00 0.00 8014.50 1672.53 12451.84 00:28:00.326 0 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:00.326 | select(.opcode=="crc32c") 
00:28:00.326 | "\(.module_name) \(.executed)"' 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3844379 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3844379 ']' 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3844379 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3844379 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3844379' 00:28:00.326 killing process with pid 3844379 00:28:00.326 20:07:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3844379 00:28:00.326 Received shutdown signal, test time was about 2.000000 seconds 00:28:00.326 00:28:00.326 Latency(us) 00:28:00.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.326 =================================================================================================================== 00:28:00.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.326 20:07:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3844379 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3845165 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3845165 /var/tmp/bperf.sock 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3845165 ']' 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:00.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.326 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.326 [2024-07-24 20:07:48.105816] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:28:00.326 [2024-07-24 20:07:48.105874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845165 ] 00:28:00.326 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.326 [2024-07-24 20:07:48.180848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.326 [2024-07-24 20:07:48.233783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.268 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.268 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:01.268 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:01.268 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:01.268 20:07:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:01.268 20:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.268 20:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.529 nvme0n1 00:28:01.790 20:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:01.790 20:07:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:01.790 Running I/O for 2 seconds... 00:28:03.700 00:28:03.700 Latency(us) 00:28:03.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.700 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:03.700 nvme0n1 : 2.01 21876.62 85.46 0.00 0.00 5843.14 2525.87 9175.04 00:28:03.700 =================================================================================================================== 00:28:03.700 Total : 21876.62 85.46 0.00 0.00 5843.14 2525.87 9175.04 00:28:03.700 0 00:28:03.700 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:03.700 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:03.700 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:03.700 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:03.700 | select(.opcode=="crc32c") 00:28:03.700 | "\(.module_name) \(.executed)"' 00:28:03.700 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 3845165 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3845165 ']' 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3845165 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3845165 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3845165' 00:28:03.961 killing process with pid 3845165 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3845165 00:28:03.961 Received shutdown signal, test time was about 2.000000 seconds 00:28:03.961 00:28:03.961 Latency(us) 00:28:03.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.961 =================================================================================================================== 00:28:03.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:03.961 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3845165 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3845917 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3845917 /var/tmp/bperf.sock 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3845917 ']' 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:04.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:04.222 20:07:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.222 [2024-07-24 20:07:51.980128] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:28:04.222 [2024-07-24 20:07:51.980182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845917 ] 00:28:04.222 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:04.222 Zero copy mechanism will not be used. 00:28:04.222 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.222 [2024-07-24 20:07:52.053376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.222 [2024-07-24 20:07:52.105181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.794 20:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.794 20:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:04.794 20:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:04.794 20:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:04.794 20:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:05.054 20:07:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.054 20:07:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.315 nvme0n1 00:28:05.315 20:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:05.315 20:07:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:05.604 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.604 Zero copy mechanism will not be used. 00:28:05.604 Running I/O for 2 seconds... 00:28:07.517 00:28:07.517 Latency(us) 00:28:07.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.517 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:07.517 nvme0n1 : 2.01 2737.14 342.14 0.00 0.00 5835.52 4341.76 26542.08 00:28:07.517 =================================================================================================================== 00:28:07.517 Total : 2737.14 342.14 0.00 0.00 5835.52 4341.76 26542.08 00:28:07.517 0 00:28:07.517 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:07.517 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:07.517 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:07.517 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:07.517 | select(.opcode=="crc32c") 00:28:07.517 | "\(.module_name) \(.executed)"' 00:28:07.517 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3845917 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3845917 ']' 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3845917 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:07.778 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3845917 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3845917' 00:28:07.779 killing process with pid 3845917 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3845917 00:28:07.779 Received shutdown signal, test time was about 2.000000 seconds 
00:28:07.779 00:28:07.779 Latency(us) 00:28:07.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.779 =================================================================================================================== 00:28:07.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3845917 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3843511 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3843511 ']' 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3843511 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3843511 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3843511' 00:28:07.779 killing process with pid 3843511 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3843511 00:28:07.779 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3843511 00:28:08.040 00:28:08.040 real 0m16.305s 00:28:08.040 user 0m32.191s 00:28:08.040 sys 0m3.164s 
00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.040 ************************************ 00:28:08.040 END TEST nvmf_digest_clean 00:28:08.040 ************************************ 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.040 ************************************ 00:28:08.040 START TEST nvmf_digest_error 00:28:08.040 ************************************ 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3846630 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3846630 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3846630 ']' 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.040 20:07:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.040 [2024-07-24 20:07:55.988561] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:28:08.040 [2024-07-24 20:07:55.988612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.301 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.301 [2024-07-24 20:07:56.052828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.301 [2024-07-24 20:07:56.117722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.301 [2024-07-24 20:07:56.117758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:08.301 [2024-07-24 20:07:56.117765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.301 [2024-07-24 20:07:56.117772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.301 [2024-07-24 20:07:56.117777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.301 [2024-07-24 20:07:56.117801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.873 [2024-07-24 20:07:56.783706] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.873 20:07:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.873 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.134 null0 00:28:09.134 [2024-07-24 20:07:56.864541] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.134 [2024-07-24 20:07:56.888748] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3846969 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3846969 /var/tmp/bperf.sock 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3846969 ']' 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.134 20:07:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.134 [2024-07-24 20:07:56.943801] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:28:09.134 [2024-07-24 20:07:56.943861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846969 ] 00:28:09.134 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.134 [2024-07-24 20:07:57.024981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.134 [2024-07-24 20:07:57.078532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:10.076 20:07:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.076 20:07:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.337 nvme0n1 00:28:10.337 20:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:10.337 20:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.337 20:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.337 20:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.337 20:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:10.337 20:07:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:10.598 Running I/O for 2 seconds... 00:28:10.598 [2024-07-24 20:07:58.316851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.316883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.316893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.330884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.330904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.330911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.342854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.342872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.342879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.355570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.355590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.355596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.368653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.368672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.368678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.379955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.379974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.379981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.392277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.392295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.392302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.405361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.405380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.405386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.416295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.416313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.416320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.429516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.429533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.429540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.442055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.442073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.442080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.454003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 
00:28:10.598 [2024-07-24 20:07:58.454021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.454027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.465620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.465638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.598 [2024-07-24 20:07:58.465644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.598 [2024-07-24 20:07:58.477882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.598 [2024-07-24 20:07:58.477900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.599 [2024-07-24 20:07:58.477907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.599 [2024-07-24 20:07:58.490657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.599 [2024-07-24 20:07:58.490675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.599 [2024-07-24 20:07:58.490682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.599 [2024-07-24 20:07:58.503368] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.599 [2024-07-24 20:07:58.503387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.599 [2024-07-24 20:07:58.503397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.599 [2024-07-24 20:07:58.515526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.599 [2024-07-24 20:07:58.515544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.599 [2024-07-24 20:07:58.515551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.599 [2024-07-24 20:07:58.527536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.599 [2024-07-24 20:07:58.527554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.599 [2024-07-24 20:07:58.527560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.599 [2024-07-24 20:07:58.540096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.599 [2024-07-24 20:07:58.540114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.599 [2024-07-24 20:07:58.540120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.552505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.552523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.552529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.564887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.564905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.564912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.576541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.576558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.576564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.589067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.589084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.589090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.600566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.600584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.600590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.613469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.613491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.613498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.624728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.624746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.624752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.639279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.639297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.639304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.651916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.651934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.651941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.662688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.662705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.662712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.675244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.675261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.675267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.687110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.687127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22167 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.687134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.699559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.699576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.699582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.712179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.712196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.860 [2024-07-24 20:07:58.712210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.860 [2024-07-24 20:07:58.724434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.860 [2024-07-24 20:07:58.724451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.724457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.861 [2024-07-24 20:07:58.736732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.861 [2024-07-24 20:07:58.736750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:10694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.736756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.861 [2024-07-24 20:07:58.747736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.861 [2024-07-24 20:07:58.747753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.747759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.861 [2024-07-24 20:07:58.760585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.861 [2024-07-24 20:07:58.760602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.760608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.861 [2024-07-24 20:07:58.773816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.861 [2024-07-24 20:07:58.773834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.773840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.861 [2024-07-24 20:07:58.785446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.861 [2024-07-24 20:07:58.785463] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.785470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.861 [2024-07-24 20:07:58.797974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.861 [2024-07-24 20:07:58.797990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.797996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.861 [2024-07-24 20:07:58.811852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:10.861 [2024-07-24 20:07:58.811870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.861 [2024-07-24 20:07:58.811877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.824806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.824827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.824833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.835777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.835794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.835801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.847681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.847699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.847705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.862019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.862037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.862043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.872594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.872611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.872617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.885672] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.885689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.885696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.897650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.897666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.897673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.909749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.909766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.909773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.921860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.921877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.921884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.935780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.935798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.935804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.947071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.947089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.947095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.959183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.959204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.959211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.970696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.970713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.970719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.982420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.982437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.122 [2024-07-24 20:07:58.982444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.122 [2024-07-24 20:07:58.996568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.122 [2024-07-24 20:07:58.996586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.123 [2024-07-24 20:07:58.996592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.123 [2024-07-24 20:07:59.008582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.123 [2024-07-24 20:07:59.008599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.123 [2024-07-24 20:07:59.008606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.123 [2024-07-24 20:07:59.020948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.123 [2024-07-24 20:07:59.020964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.123 [2024-07-24 20:07:59.020971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.123 [2024-07-24 20:07:59.032026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.123 [2024-07-24 20:07:59.032043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.123 [2024-07-24 20:07:59.032053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.123 [2024-07-24 20:07:59.043804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.123 [2024-07-24 20:07:59.043822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.123 [2024-07-24 20:07:59.043828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.123 [2024-07-24 20:07:59.057763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.123 [2024-07-24 20:07:59.057780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.123 [2024-07-24 20:07:59.057786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.123 [2024-07-24 20:07:59.070248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.123 [2024-07-24 20:07:59.070265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10074 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:11.123 [2024-07-24 20:07:59.070271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.081849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.081867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.081873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.095722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.095739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.095746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.106161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.106178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.106184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.119409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.119426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:11150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.119433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.131714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.131731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.131738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.143746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.143766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.143773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.155247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.155264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.155271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.167524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.167541] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.167547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.179075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.179092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.179098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.192319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.192336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.192342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.204433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.204450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.204457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.216018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.216036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.216042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.229154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.229171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.229178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.241698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.384 [2024-07-24 20:07:59.241715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.384 [2024-07-24 20:07:59.241722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.384 [2024-07-24 20:07:59.253917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.385 [2024-07-24 20:07:59.253935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.385 [2024-07-24 20:07:59.253941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.385 [2024-07-24 20:07:59.265467] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.385 [2024-07-24 20:07:59.265484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.385 [2024-07-24 20:07:59.265490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.385 [2024-07-24 20:07:59.277951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.385 [2024-07-24 20:07:59.277968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.385 [2024-07-24 20:07:59.277974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.385 [2024-07-24 20:07:59.289830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.385 [2024-07-24 20:07:59.289846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.385 [2024-07-24 20:07:59.289853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.385 [2024-07-24 20:07:59.301362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.385 [2024-07-24 20:07:59.301379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.385 [2024-07-24 20:07:59.301385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:11.385 [2024-07-24 20:07:59.315777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.385 [2024-07-24 20:07:59.315793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.385 [2024-07-24 20:07:59.315799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.385 [2024-07-24 20:07:59.327658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.385 [2024-07-24 20:07:59.327675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.385 [2024-07-24 20:07:59.327681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.340154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.340170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.340177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.350961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.350979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.350989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.363080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.363097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.363104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.376278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.376295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.376302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.388222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.388241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.388248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.400310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.400329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.400336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.411902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.411919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.411926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.424563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.424581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.424587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.437248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.437266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.437272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.448580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.448597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22388 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.448604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.462036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.462057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.462063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.473019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.473036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.473043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.486153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.486170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.486176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.497708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.497726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:10608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.497732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.511477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.511494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.511500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.523341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.523357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.523363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.534763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.534780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.646 [2024-07-24 20:07:59.534787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.646 [2024-07-24 20:07:59.546511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.646 [2024-07-24 20:07:59.546528] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.647 [2024-07-24 20:07:59.546535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.647 [2024-07-24 20:07:59.558396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.647 [2024-07-24 20:07:59.558413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.647 [2024-07-24 20:07:59.558423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.647 [2024-07-24 20:07:59.571050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.647 [2024-07-24 20:07:59.571067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.647 [2024-07-24 20:07:59.571074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.647 [2024-07-24 20:07:59.583869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.647 [2024-07-24 20:07:59.583886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.647 [2024-07-24 20:07:59.583893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.647 [2024-07-24 20:07:59.596539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa8acd0) 00:28:11.647 [2024-07-24 20:07:59.596555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.647 [2024-07-24 20:07:59.596562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.908 [2024-07-24 20:07:59.608890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.908 [2024-07-24 20:07:59.608907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.908 [2024-07-24 20:07:59.608914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.908 [2024-07-24 20:07:59.619705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.908 [2024-07-24 20:07:59.619721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.908 [2024-07-24 20:07:59.619728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.908 [2024-07-24 20:07:59.633174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.908 [2024-07-24 20:07:59.633191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.908 [2024-07-24 20:07:59.633197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.908 [2024-07-24 20:07:59.645486] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.908 [2024-07-24 20:07:59.645502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.908 [2024-07-24 20:07:59.645508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.908 [2024-07-24 20:07:59.657259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.908 [2024-07-24 20:07:59.657277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.908 [2024-07-24 20:07:59.657284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.908 [2024-07-24 20:07:59.669478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.908 [2024-07-24 20:07:59.669499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.908 [2024-07-24 20:07:59.669506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.908 [2024-07-24 20:07:59.681504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.908 [2024-07-24 20:07:59.681521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.908 [2024-07-24 20:07:59.681528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.693846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.693863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.693870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.706865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.706882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.706888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.718580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.718598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.718604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.730900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.730917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.730923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.743388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.743404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.743410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.754890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.754907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.754914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.766784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.766802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.766808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.778916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.778933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.778940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.791698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.791716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.791723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.804879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.804897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.804903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.816793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.816811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.816818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.829881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.829898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6752 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.829905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.841148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.841165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.841171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.909 [2024-07-24 20:07:59.854158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:11.909 [2024-07-24 20:07:59.854176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.909 [2024-07-24 20:07:59.854182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.865800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.865817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.865823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.878336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.878354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:20074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.878364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.890050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.890067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.890074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.902098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.902115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.902122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.914275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.914292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.914299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.927234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.927251] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.927258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.939721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.939738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.939745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.951409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.951426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.951433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.965000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.965018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.965025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.977210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.977227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.977234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:07:59.988025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:07:59.988046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:07:59.988052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.000497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.000514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.000521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.014499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.014520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.014527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.026975] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.026996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.027003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.038420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.038438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.038445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.050019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.050037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.050044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.063609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.063627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.063633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.075812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.075838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.075850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.086907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.086926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.086932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.099086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.099104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.099111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.111865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.111882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.111888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.171 [2024-07-24 20:08:00.123148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.171 [2024-07-24 20:08:00.123166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.171 [2024-07-24 20:08:00.123173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.432 [2024-07-24 20:08:00.135451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.432 [2024-07-24 20:08:00.135468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.432 [2024-07-24 20:08:00.135475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.432 [2024-07-24 20:08:00.147595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.432 [2024-07-24 20:08:00.147612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.432 [2024-07-24 20:08:00.147619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.432 [2024-07-24 20:08:00.161130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.432 [2024-07-24 20:08:00.161147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 
20:08:00.161153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.173181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.173198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.173209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.185260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.185278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.185285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.197586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.197604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.197615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.210049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.210067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7362 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.210074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.220977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.220995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.221002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.233614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.233632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.233638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.246411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.246429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.246436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.259572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.259590] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.259596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.271626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.271643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.271649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.282442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.282461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.282467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 [2024-07-24 20:08:00.295448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa8acd0) 00:28:12.433 [2024-07-24 20:08:00.295466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.433 [2024-07-24 20:08:00.295472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.433 00:28:12.433 Latency(us) 00:28:12.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.433 Job: nvme0n1 (Core Mask 0x2, workload: 
randread, depth: 128, IO size: 4096) 00:28:12.433 nvme0n1 : 2.00 20701.91 80.87 0.00 0.00 6175.76 3495.25 18786.99 00:28:12.433 =================================================================================================================== 00:28:12.433 Total : 20701.91 80.87 0.00 0.00 6175.76 3495.25 18786.99 00:28:12.433 0 00:28:12.433 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:12.433 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:12.433 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:12.433 | .driver_specific 00:28:12.433 | .nvme_error 00:28:12.433 | .status_code 00:28:12.433 | .command_transient_transport_error' 00:28:12.433 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3846969 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3846969 ']' 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3846969 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3846969 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3846969' 00:28:12.693 killing process with pid 3846969 00:28:12.693 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3846969 00:28:12.693 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.693 00:28:12.693 Latency(us) 00:28:12.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.693 =================================================================================================================== 00:28:12.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.694 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3846969 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3847654 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3847654 /var/tmp/bperf.sock 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3847654 ']' 00:28:12.954 20:08:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.954 20:08:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.954 [2024-07-24 20:08:00.706847] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:28:12.954 [2024-07-24 20:08:00.706902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847654 ] 00:28:12.954 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.955 Zero copy mechanism will not be used. 
00:28:12.955 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.955 [2024-07-24 20:08:00.781320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.955 [2024-07-24 20:08:00.834287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.896 20:08:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.157 nvme0n1 00:28:14.157 20:08:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:14.157 20:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.157 20:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.157 20:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.157 20:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:14.157 20:08:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.417 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.417 Zero copy mechanism will not be used. 00:28:14.417 Running I/O for 2 seconds... 
00:28:14.417 [2024-07-24 20:08:02.150570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.417 [2024-07-24 20:08:02.150601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.417 [2024-07-24 20:08:02.150610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.417 [2024-07-24 20:08:02.168312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.417 [2024-07-24 20:08:02.168335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.417 [2024-07-24 20:08:02.168342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.417 [2024-07-24 20:08:02.184986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.185005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.185011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.203212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.203230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.203237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.218737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.218755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.218762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.230404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.230423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.230430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.245220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.245238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.245244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.263930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.263947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.263954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.278495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.278512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.278524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.293804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.293821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.293828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.311711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.311728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.311735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.328025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.328043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.328049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.343022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.343038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.343045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.418 [2024-07-24 20:08:02.359621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.418 [2024-07-24 20:08:02.359638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.418 [2024-07-24 20:08:02.359644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.377536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.377554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.377560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.393330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.393347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.393353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.407970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.407987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.407994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.424048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.424069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.424076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.440530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.440547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.440554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.458284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 
20:08:02.458301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.458307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.475049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.475066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.475073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.489771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.489788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.489794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.505138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.505155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.505161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.521051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.521068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.521074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.537865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.537882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.537888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.553014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.553031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.553041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.570550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.570567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.679 [2024-07-24 20:08:02.570574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.679 [2024-07-24 20:08:02.586055] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.679 [2024-07-24 20:08:02.586072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.680 [2024-07-24 20:08:02.586078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.680 [2024-07-24 20:08:02.601892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.680 [2024-07-24 20:08:02.601910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.680 [2024-07-24 20:08:02.601916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.680 [2024-07-24 20:08:02.618844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.680 [2024-07-24 20:08:02.618862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.680 [2024-07-24 20:08:02.618869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.635039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.635057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.635063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.652320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.652337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.652344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.668212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.668229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.668236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.683021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.683038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.683045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.701159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.701178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.701185] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.717342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.717358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.717365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.734062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.734079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.734085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.750988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.751005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.751012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.767328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.767345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 
20:08:02.767351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.782691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.782709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.782715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.798266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.798283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.798289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.812855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.812872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.812878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.829881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.829899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.829905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.848484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.848501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.848507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.860575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.860592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.860598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.877012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.877029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.877035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.941 [2024-07-24 20:08:02.891133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:14.941 [2024-07-24 20:08:02.891150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.941 [2024-07-24 20:08:02.891156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.202 [2024-07-24 20:08:02.907581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.202 [2024-07-24 20:08:02.907598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.202 [2024-07-24 20:08:02.907605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.202 [2024-07-24 20:08:02.924845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.202 [2024-07-24 20:08:02.924863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.202 [2024-07-24 20:08:02.924869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.202 [2024-07-24 20:08:02.939705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.202 [2024-07-24 20:08:02.939723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.202 [2024-07-24 20:08:02.939729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:02.952589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:02.952606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:02.952613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:02.967170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:02.967188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:02.967197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:02.982985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:02.983002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:02.983008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:02.996086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:02.996104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:02.996110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.010851] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.010869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.010875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.025378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.025396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.025402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.034902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.034921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.034927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.048517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.048535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.048542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.062260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.062278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.062284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.075695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.075714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.075720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.089185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.089211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.089218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.103010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.103027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.103034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.117196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.117217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.117224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.131223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.131241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.131248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.203 [2024-07-24 20:08:03.145645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.203 [2024-07-24 20:08:03.145663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.203 [2024-07-24 20:08:03.145670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.464 [2024-07-24 20:08:03.159670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:15.464 [2024-07-24 20:08:03.159688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.464 [2024-07-24 20:08:03.159695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated data digest error / TRANSIENT TRANSPORT ERROR READ completions on tqpair=(0x14df9f0) between 20:08:03.175 and 20:08:04.103 elided; entries are identical apart from timestamp, cid, lba, and sqhd ...]
00:28:16.249 [2024-07-24 20:08:04.119166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:16.249 [2024-07-24 20:08:04.119183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.249 [2024-07-24 20:08:04.119190]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.249 [2024-07-24 20:08:04.135591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df9f0) 00:28:16.249 [2024-07-24 20:08:04.135609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.249 [2024-07-24 20:08:04.135615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.249 00:28:16.249 Latency(us) 00:28:16.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.249 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:16.249 nvme0n1 : 2.00 1996.53 249.57 0.00 0.00 8006.15 2184.53 19770.03 00:28:16.249 =================================================================================================================== 00:28:16.249 Total : 1996.53 249.57 0.00 0.00 8006.15 2184.53 19770.03 00:28:16.249 0 00:28:16.249 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:16.249 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:16.249 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:16.249 | .driver_specific 00:28:16.249 | .nvme_error 00:28:16.249 | .status_code 00:28:16.249 | .command_transient_transport_error' 00:28:16.249 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 129 > 0 )) 
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3847654
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3847654 ']'
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3847654
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3847654
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3847654'
00:28:16.510 killing process with pid 3847654
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3847654
00:28:16.510 Received shutdown signal, test time was about 2.000000 seconds
00:28:16.510
00:28:16.510                                                 Latency(us)
00:28:16.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:16.510 ===================================================================================================================
00:28:16.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:16.510 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3847654
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3848342
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3848342 /var/tmp/bperf.sock
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3848342 ']'
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:16.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:16.770 20:08:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:16.770 [2024-07-24 20:08:04.540162] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization...
00:28:16.770 [2024-07-24 20:08:04.540221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3848342 ]
00:28:16.770 EAL: No free 2048 kB hugepages reported on node 1
00:28:16.770 [2024-07-24 20:08:04.614537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:16.770 [2024-07-24 20:08:04.666442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:17.712 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:17.713 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:17.973 nvme0n1
00:28:17.973 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:17.973 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:17.973 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:17.973 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:17.973 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:17.973 20:08:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:17.973 Running I/O for 2 seconds...
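The `killprocess 3847654` teardown traced at the top of this chunk follows a common shell pattern: verify the pid is non-empty and the process is still alive (`kill -0`), check the process name so it never signals `sudo`, then send the signal and reap the child with `wait`. A condensed sketch of that pattern (the `uname`/`ps`/`sudo` guards from `autotest_common.sh` are omitted here, so this is an illustration, not the real helper):

```shell
# Condensed sketch of the killprocess helper seen in the xtrace above.
killprocess() {
    pid=$1
    [ -z "$pid" ] && return 1               # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1  # process must still exist
    echo "killing process with pid $pid"
    kill "$pid"                             # send SIGTERM
    wait "$pid" 2>/dev/null || true         # reap the child, ignore its status
}

# Usage: kill a throwaway background process.
sleep 30 &
killprocess $!
```

The `wait` at the end is what produces the ordering visible in the log: the bdevperf process prints its shutdown summary (the Latency table above) between `kill` and the `wait` returning.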
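The wall of "Data digest error" records that follows is the point of this test: `accel_error_inject_error -o crc32c -t corrupt -i 256` makes the host emit write PDUs whose CRC-32C data digest (enabled by `--ddgst` on `bdev_nvme_attach_controller`) no longer matches the payload, so the target rejects each write and the host sees a transient transport error. As an illustration only of what the receiver recomputes (SPDK does this in its accel layer, often hardware-accelerated; this pure-Python version is just a sketch):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), the checksum carried in the NVMe/TCP
    PDU data-digest (DDGST) field: reflected polynomial 0x82F63B78,
    initial value and final XOR both 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Flipping a single payload byte (roughly what the injected crc32c
# "corrupt" error amounts to) changes the digest, so the receiver's
# recomputed value no longer matches and it reports a data digest error.
payload = bytes(range(64))
corrupted = bytearray(payload)
corrupted[10] ^= 0xFF
assert crc32c(bytes(corrupted)) != crc32c(payload)
```

Because the error is injected with an interval of 256, only a fraction of the writes are corrupted; each mismatch shows up below as a `data_crc32_calc_done` error on the TCP qpair followed by the failed WRITE completion.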
00:28:18.234 [2024-07-24 20:08:05.937288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:05.938215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:05.938244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:05.949148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.235 [2024-07-24 20:08:05.950103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:05.950123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:05.960955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.235 [2024-07-24 20:08:05.961931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:05.961947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:05.972736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:05.973710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:05.973727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:05.984527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.235 [2024-07-24 20:08:05.985597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:05.985614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:05.996428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.235 [2024-07-24 20:08:05.997381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:05.997397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.008178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:06.009151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.009167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.019942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.235 [2024-07-24 20:08:06.020917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.020933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.031702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.235 [2024-07-24 20:08:06.032680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.032696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.043457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:06.044422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.044439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.055196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.235 [2024-07-24 20:08:06.056172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.056188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.066955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.235 [2024-07-24 20:08:06.067935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:18.235 [2024-07-24 20:08:06.067951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.078701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:06.079673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.079689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.090468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.235 [2024-07-24 20:08:06.091437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.091453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.102219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.235 [2024-07-24 20:08:06.103184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.103204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.113956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:06.114927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:7748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.114943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.125700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.235 [2024-07-24 20:08:06.126668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.126684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.137430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.235 [2024-07-24 20:08:06.138385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.138405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.149160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:06.150125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.150141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.160900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.235 [2024-07-24 20:08:06.161870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.161888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.172624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.235 [2024-07-24 20:08:06.173604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.173620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.235 [2024-07-24 20:08:06.184376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.235 [2024-07-24 20:08:06.185343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.235 [2024-07-24 20:08:06.185359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.496 [2024-07-24 20:08:06.196134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.496 [2024-07-24 20:08:06.197115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.496 [2024-07-24 20:08:06.197131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.496 [2024-07-24 20:08:06.208071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 
00:28:18.496 [2024-07-24 20:08:06.209030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.496 [2024-07-24 20:08:06.209046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.496 [2024-07-24 20:08:06.219800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.496 [2024-07-24 20:08:06.220746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.496 [2024-07-24 20:08:06.220762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.231533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.497 [2024-07-24 20:08:06.232498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.232513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.243263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.497 [2024-07-24 20:08:06.244233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.244250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.255025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.497 [2024-07-24 20:08:06.255998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.256015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.266776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.497 [2024-07-24 20:08:06.267717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.267733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.278520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.497 [2024-07-24 20:08:06.279503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.279519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.290266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.497 [2024-07-24 20:08:06.291227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.291243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.301997] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.497 [2024-07-24 20:08:06.302967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.302983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.313722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.497 [2024-07-24 20:08:06.314695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.314711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.325460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.497 [2024-07-24 20:08:06.326429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.326445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.337188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.497 [2024-07-24 20:08:06.338122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.338138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 
dnr:0 00:28:18.497 [2024-07-24 20:08:06.348927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.497 [2024-07-24 20:08:06.349904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.349919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.360659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.497 [2024-07-24 20:08:06.361625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.361641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.372404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.497 [2024-07-24 20:08:06.373371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.373387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.384134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.497 [2024-07-24 20:08:06.385103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.385120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.395886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.497 [2024-07-24 20:08:06.396859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.396875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.407625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.497 [2024-07-24 20:08:06.408573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.408589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.419352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.497 [2024-07-24 20:08:06.420282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.420298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.431080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.497 [2024-07-24 20:08:06.432056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.432073] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.497 [2024-07-24 20:08:06.442837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.497 [2024-07-24 20:08:06.443808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.497 [2024-07-24 20:08:06.443826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.454607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.758 [2024-07-24 20:08:06.455591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.758 [2024-07-24 20:08:06.455607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.466349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.758 [2024-07-24 20:08:06.467309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.758 [2024-07-24 20:08:06.467325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.478069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.758 [2024-07-24 20:08:06.479044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.758 [2024-07-24 20:08:06.479061] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.489808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.758 [2024-07-24 20:08:06.490776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.758 [2024-07-24 20:08:06.490792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.501543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.758 [2024-07-24 20:08:06.502521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.758 [2024-07-24 20:08:06.502536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.513279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.758 [2024-07-24 20:08:06.514241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.758 [2024-07-24 20:08:06.514256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.525000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.758 [2024-07-24 20:08:06.525963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:18.758 [2024-07-24 20:08:06.525978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.758 [2024-07-24 20:08:06.536745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.758 [2024-07-24 20:08:06.537721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.758 [2024-07-24 20:08:06.537736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.548502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.759 [2024-07-24 20:08:06.549446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.549461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.560222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.759 [2024-07-24 20:08:06.561183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.561199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.571952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.759 [2024-07-24 20:08:06.572915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9239 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.572931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.583669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.759 [2024-07-24 20:08:06.584637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.584652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.595401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.759 [2024-07-24 20:08:06.596366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.596382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.607147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.759 [2024-07-24 20:08:06.608116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.608132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.618856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.759 [2024-07-24 20:08:06.619825] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.619840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.630595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.759 [2024-07-24 20:08:06.631549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.631564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.642328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.759 [2024-07-24 20:08:06.643296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.643311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.654060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.759 [2024-07-24 20:08:06.655034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.655049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.665799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.759 [2024-07-24 20:08:06.666766] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.666781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.677534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:18.759 [2024-07-24 20:08:06.678469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.678485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.689261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:18.759 [2024-07-24 20:08:06.690230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.690245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:18.759 [2024-07-24 20:08:06.701007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:18.759 [2024-07-24 20:08:06.701984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.759 [2024-07-24 20:08:06.702000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.020 [2024-07-24 20:08:06.712771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 
00:28:19.020 [2024-07-24 20:08:06.713704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.020 [2024-07-24 20:08:06.713719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.020 [2024-07-24 20:08:06.724506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.020 [2024-07-24 20:08:06.725475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.020 [2024-07-24 20:08:06.725491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.736243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.021 [2024-07-24 20:08:06.737212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.737228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.747968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.021 [2024-07-24 20:08:06.748938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.748956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.759696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.021 [2024-07-24 20:08:06.760681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.760697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.771472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.021 [2024-07-24 20:08:06.772402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.772421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.783213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.021 [2024-07-24 20:08:06.784183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.784199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.794960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.021 [2024-07-24 20:08:06.795934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.795950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.806708] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.021 [2024-07-24 20:08:06.807649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.807665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.818444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.021 [2024-07-24 20:08:06.819386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.819402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.830171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.021 [2024-07-24 20:08:06.831146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.831162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.841923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.021 [2024-07-24 20:08:06.842864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.842880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:28:19.021 [2024-07-24 20:08:06.853668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.021 [2024-07-24 20:08:06.854640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.865398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.021 [2024-07-24 20:08:06.866362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.866377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.877120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.021 [2024-07-24 20:08:06.878089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.878105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.888863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.021 [2024-07-24 20:08:06.889845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.889861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.900623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.021 [2024-07-24 20:08:06.901588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.901604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.912358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.021 [2024-07-24 20:08:06.913322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.913338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.924086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.021 [2024-07-24 20:08:06.925058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.925073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.935875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.021 [2024-07-24 20:08:06.936844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.936860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.947606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.021 [2024-07-24 20:08:06.948554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.948570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.959348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.021 [2024-07-24 20:08:06.960316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.960332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.021 [2024-07-24 20:08:06.971084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.021 [2024-07-24 20:08:06.972015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.021 [2024-07-24 20:08:06.972031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:06.982844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.283 [2024-07-24 20:08:06.983819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:19.283 [2024-07-24 20:08:06.983835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:06.994678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.283 [2024-07-24 20:08:06.995619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:06.995635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.006424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.283 [2024-07-24 20:08:07.007381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.007398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.018171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.283 [2024-07-24 20:08:07.019143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.019159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.029929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.283 [2024-07-24 20:08:07.030908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:23833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.030923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.041686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.283 [2024-07-24 20:08:07.042654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.042670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.053427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.283 [2024-07-24 20:08:07.054387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.054405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.065167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.283 [2024-07-24 20:08:07.066099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.066114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.076897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.283 [2024-07-24 20:08:07.077866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.077882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.088649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.283 [2024-07-24 20:08:07.089622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.089637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.100399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.283 [2024-07-24 20:08:07.101367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.101384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.112141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.283 [2024-07-24 20:08:07.113112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.113128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.123901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 
00:28:19.283 [2024-07-24 20:08:07.124867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.124883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.135640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.283 [2024-07-24 20:08:07.136620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.136636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.147391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.283 [2024-07-24 20:08:07.148350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.148365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.159137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.283 [2024-07-24 20:08:07.160119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.160135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.170913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.283 [2024-07-24 20:08:07.171886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.171902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.182659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.283 [2024-07-24 20:08:07.183670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.183686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.194441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.283 [2024-07-24 20:08:07.195384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.195400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.206182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.283 [2024-07-24 20:08:07.207179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.207195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.218124] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.283 [2024-07-24 20:08:07.219094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.219111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.283 [2024-07-24 20:08:07.229875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.283 [2024-07-24 20:08:07.230822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.283 [2024-07-24 20:08:07.230838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.241623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.242574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.242589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.253371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.545 [2024-07-24 20:08:07.254328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:28:19.545 [2024-07-24 20:08:07.265095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.545 [2024-07-24 20:08:07.266062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.266078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.276841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.277812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.277827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.288595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.545 [2024-07-24 20:08:07.289558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.289574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.300352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.545 [2024-07-24 20:08:07.301317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.301333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.312105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.313076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.313092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.323841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.545 [2024-07-24 20:08:07.324809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.324825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.335597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.545 [2024-07-24 20:08:07.336575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.336591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.347334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.348300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.348316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.359090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.545 [2024-07-24 20:08:07.360021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.360040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.370830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.545 [2024-07-24 20:08:07.371761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.371777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.382588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.383554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.383569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.394346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.545 [2024-07-24 20:08:07.395312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:19.545 [2024-07-24 20:08:07.395328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.406091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.545 [2024-07-24 20:08:07.407056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.407072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.417828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.418800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.418816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.429574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.545 [2024-07-24 20:08:07.430524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.430540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.441307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.545 [2024-07-24 20:08:07.442268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:22139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.442284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.453061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.454033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.454049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.464792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.545 [2024-07-24 20:08:07.465724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.465740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.476531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.545 [2024-07-24 20:08:07.477495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.477511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.545 [2024-07-24 20:08:07.488271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.545 [2024-07-24 20:08:07.489204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.545 [2024-07-24 20:08:07.489219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.500019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.807 [2024-07-24 20:08:07.500991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.501007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.511787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.807 [2024-07-24 20:08:07.512751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.512766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.523544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.807 [2024-07-24 20:08:07.524482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.524497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.535298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.807 
[2024-07-24 20:08:07.536273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.536289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.547052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.807 [2024-07-24 20:08:07.548023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.548038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.558821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.807 [2024-07-24 20:08:07.559792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.559807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.570560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.807 [2024-07-24 20:08:07.571512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.571527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.582274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.807 [2024-07-24 20:08:07.583234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.583249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.594021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.807 [2024-07-24 20:08:07.595003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.595018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.605753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.807 [2024-07-24 20:08:07.606721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.606737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.617510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.807 [2024-07-24 20:08:07.618472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.618487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.629240] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.807 [2024-07-24 20:08:07.630208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.630224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.640967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.807 [2024-07-24 20:08:07.641939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.641955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.652682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.807 [2024-07-24 20:08:07.653654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.653669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.664430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.807 [2024-07-24 20:08:07.665381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.665400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:28:19.807 [2024-07-24 20:08:07.676147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.807 [2024-07-24 20:08:07.677115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.677130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.687871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.807 [2024-07-24 20:08:07.688838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.688854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.807 [2024-07-24 20:08:07.699595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.807 [2024-07-24 20:08:07.700566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.807 [2024-07-24 20:08:07.700581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.808 [2024-07-24 20:08:07.711316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.808 [2024-07-24 20:08:07.712242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.808 [2024-07-24 20:08:07.712257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.808 [2024-07-24 20:08:07.723039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.808 [2024-07-24 20:08:07.724010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.808 [2024-07-24 20:08:07.724027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.808 [2024-07-24 20:08:07.734766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:19.808 [2024-07-24 20:08:07.735738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.808 [2024-07-24 20:08:07.735754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.808 [2024-07-24 20:08:07.746513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:19.808 [2024-07-24 20:08:07.747445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.808 [2024-07-24 20:08:07.747460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:19.808 [2024-07-24 20:08:07.758264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:19.808 [2024-07-24 20:08:07.759211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:19.808 [2024-07-24 20:08:07.759226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.769999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:20.069 [2024-07-24 20:08:07.770974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.770990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.781733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:20.069 [2024-07-24 20:08:07.782703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.782719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.793485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:20.069 [2024-07-24 20:08:07.794453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.794468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.805235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:20.069 [2024-07-24 20:08:07.806203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.806218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.816972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:20.069 [2024-07-24 20:08:07.817941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.817957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.828765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:20.069 [2024-07-24 20:08:07.829734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.829749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.840479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:20.069 [2024-07-24 20:08:07.841444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.841459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.852215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:20.069 [2024-07-24 20:08:07.853177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 
[2024-07-24 20:08:07.853193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.863960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:20.069 [2024-07-24 20:08:07.864927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.864942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.875709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:20.069 [2024-07-24 20:08:07.876673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.876689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.887423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:20.069 [2024-07-24 20:08:07.888383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.888399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.899155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f7da8 00:28:20.069 [2024-07-24 20:08:07.900124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1533 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.900139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.910893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190eb760 00:28:20.069 [2024-07-24 20:08:07.911863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.911878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 [2024-07-24 20:08:07.922639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17540c0) with pdu=0x2000190f92c0 00:28:20.069 [2024-07-24 20:08:07.923612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.069 [2024-07-24 20:08:07.923628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:20.069 00:28:20.069 Latency(us) 00:28:20.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.069 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:20.069 nvme0n1 : 2.01 21646.72 84.56 0.00 0.00 5905.47 5297.49 16711.68 00:28:20.069 =================================================================================================================== 00:28:20.069 Total : 21646.72 84.56 0.00 0.00 5905.47 5297.49 16711.68 00:28:20.069 0 00:28:20.069 20:08:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:20.069 20:08:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b 
nvme0n1 00:28:20.069 20:08:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:20.069 | .driver_specific 00:28:20.069 | .nvme_error 00:28:20.069 | .status_code 00:28:20.069 | .command_transient_transport_error' 00:28:20.069 20:08:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3848342 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3848342 ']' 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3848342 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3848342 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3848342' 00:28:20.330 killing process with pid 3848342 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3848342 00:28:20.330 Received shutdown signal, test time was about 2.000000 seconds 
00:28:20.330 00:28:20.330 Latency(us) 00:28:20.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.330 =================================================================================================================== 00:28:20.330 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.330 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3848342 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3849028 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3849028 /var/tmp/bperf.sock 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3849028 ']' 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:20.591 20:08:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.591 [2024-07-24 20:08:08.351033] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:28:20.591 [2024-07-24 20:08:08.351091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849028 ] 00:28:20.591 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.591 Zero copy mechanism will not be used. 00:28:20.591 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.591 [2024-07-24 20:08:08.426428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.591 [2024-07-24 20:08:08.478634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.160 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:21.160 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:21.160 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.160 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.423 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # 
rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:21.423 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.423 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.423 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.423 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.423 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.729 nvme0n1 00:28:21.989 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:21.989 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.989 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.989 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.989 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:21.989 20:08:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.989 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.989 Zero copy mechanism will not be used. 00:28:21.989 Running I/O for 2 seconds... 
00:28:21.989 [2024-07-24 20:08:09.793765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.794286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.794314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.810256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.810591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.810610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.823216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.823594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.823612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.835855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.836040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.836060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.848196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.848381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.848397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.859926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.860078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.860094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.873594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.873838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.873856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.885430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.885671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.885689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.897792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.898152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.898170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.909093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.909459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.909476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.921033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.921378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.989 [2024-07-24 20:08:09.921395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.989 [2024-07-24 20:08:09.931879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:21.989 [2024-07-24 20:08:09.932061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.989 [2024-07-24 20:08:09.932076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.250 [2024-07-24 20:08:09.943326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.250 [2024-07-24 20:08:09.943578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.250 [2024-07-24 20:08:09.943595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.250 [2024-07-24 20:08:09.954665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.250 [2024-07-24 20:08:09.954815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.250 [2024-07-24 20:08:09.954830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.250 [2024-07-24 20:08:09.965899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.250 [2024-07-24 20:08:09.966234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:09.966250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:09.978227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:09.978516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:09.978533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:09.990131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:09.990323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:09.990338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.002034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.002392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.002409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.014316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.014686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.014704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.027704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.028040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.028058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.040616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.040954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.040971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.052921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.053161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.053177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.064311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.064664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.064681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.076290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 
00:28:22.251 [2024-07-24 20:08:10.076532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.076549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.088307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.088662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.088683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.101287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.101627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.101645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.113570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.113956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.113973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.126130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.126458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.126475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.138334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.138709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.138725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.149267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.149665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.149686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.161074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.161501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.161519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 
20:08:10.173506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.173920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.173937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.185480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.185814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.185831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.251 [2024-07-24 20:08:10.197534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.251 [2024-07-24 20:08:10.197908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.251 [2024-07-24 20:08:10.197925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.208978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.209322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.209339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.219760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.220130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.220146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.230991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.231142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.231157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.243120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.243309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.243324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.255884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.256108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.256122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.269174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.269520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.269538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.281247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.281608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.281624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.293082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.293434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.293451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.305433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.512 [2024-07-24 20:08:10.305685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.512 [2024-07-24 20:08:10.305702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.512 [2024-07-24 20:08:10.318145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.318510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.318527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.332320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.332744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.332761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.345328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.345681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.345698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.358298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.358663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.358683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.371696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.372029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.372046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.385074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.385417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.385433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.398619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.398934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.398951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.411835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.412195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.412216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.425525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.425890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.425907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.438207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.438643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.438659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.450847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.451089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.451105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.513 [2024-07-24 20:08:10.463264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.513 [2024-07-24 20:08:10.463578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.513 [2024-07-24 20:08:10.463595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.774 [2024-07-24 20:08:10.475798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.774 [2024-07-24 20:08:10.476186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.774 [2024-07-24 20:08:10.476207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.774 [2024-07-24 20:08:10.488208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.774 [2024-07-24 20:08:10.488461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.774 [2024-07-24 20:08:10.488478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.774 [2024-07-24 20:08:10.500735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:22.774 [2024-07-24 20:08:10.500975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.774 [2024-07-24 20:08:10.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.774 [2024-07-24 20:08:10.510651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 
00:28:22.774 [2024-07-24 20:08:10.510890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.510907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.521764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.522005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.522021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.534048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.534399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.534416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.546058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.546305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.546322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.559066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.559403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.559421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.571861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.572193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.572214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.584429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.584744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.584761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.596365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.596689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.596705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.609244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.609608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.609625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.621828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.622140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.622157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.633982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.634322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.634338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.645514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.645848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.645864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.658234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.658586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.658603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.669988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.670364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.670380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.681509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.681846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.681865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.693063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.693415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.693432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.704589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.704927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.704944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:22.774 [2024-07-24 20:08:10.716995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:22.774 [2024-07-24 20:08:10.717240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.774 [2024-07-24 20:08:10.717257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.729740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.730074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.730091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.742566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.742806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.742832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.755336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.755652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.755669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.767332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.767728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.767745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.779086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.779330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.779347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.790778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.791131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.791148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.802422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.802736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.802751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.814276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.814622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.814639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.825487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.037 [2024-07-24 20:08:10.825798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.037 [2024-07-24 20:08:10.825814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.037 [2024-07-24 20:08:10.836743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.837079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.837096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.848480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.848816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.848834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.860296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.860741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.860759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.873627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.873969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.873986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.885741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.886069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.886085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.898539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.898905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.898922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.911212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.911530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.911546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.923346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.923682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.923698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.936721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.936969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.936986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.949270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.949415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.949430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.961846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.962088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.962106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.974239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.974598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.974615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.038 [2024-07-24 20:08:10.987196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.038 [2024-07-24 20:08:10.987557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.038 [2024-07-24 20:08:10.987574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.000113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.000330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.000348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.012350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.012669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.012686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.024910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.025150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.025166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.037069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.037314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.037331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.049206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.049541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.049557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.061270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.061610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.061626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.072785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.072979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.072994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.085293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.085697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.085714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.097817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.098088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.098104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.110410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.299 [2024-07-24 20:08:11.110679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.299 [2024-07-24 20:08:11.110696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.299 [2024-07-24 20:08:11.122653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.122999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.123015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.134719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.135104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.135121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.147318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.147681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.147698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.159405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.159759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.159775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.172040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.172364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.172380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.184698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.184849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.184864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.198103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.198385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.198402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.210139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.210372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.210388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.223144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.223335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.223350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.235511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.235829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.235845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.300 [2024-07-24 20:08:11.247992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.300 [2024-07-24 20:08:11.248212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.300 [2024-07-24 20:08:11.248228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.261275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.261656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.261673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.273620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.273954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.273971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.285963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.286254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.286270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.299279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.299631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.299647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.311407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.311787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.311803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.323726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.324078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.324098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.336367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.336704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.336721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.348712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.348902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.348918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.360652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.360990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.361006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.372111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.372228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.372243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.384662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.385036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.385053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.396176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.396578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.396595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.407667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.408013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.408030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.419928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.420167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.420183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.432071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.432305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.432320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.443721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.561 [2024-07-24 20:08:11.444126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.561 [2024-07-24 20:08:11.444142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.561 [2024-07-24 20:08:11.455998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.562 [2024-07-24 20:08:11.456256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.562 [2024-07-24 20:08:11.456272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.562 [2024-07-24 20:08:11.466826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.562 [2024-07-24 20:08:11.467054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.562 [2024-07-24 20:08:11.467070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.562 [2024-07-24 20:08:11.478012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.562 [2024-07-24 20:08:11.478339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.562 [2024-07-24 20:08:11.478356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.562 [2024-07-24 20:08:11.489363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.562 [2024-07-24 20:08:11.489742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.562 [2024-07-24 20:08:11.489758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.562 [2024-07-24 20:08:11.500046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.562 [2024-07-24 20:08:11.500413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.562 [2024-07-24 20:08:11.500429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.562 [2024-07-24 20:08:11.510998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.562 [2024-07-24 20:08:11.511367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.562 [2024-07-24 20:08:11.511383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.823 [2024-07-24 20:08:11.522786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.823 [2024-07-24 20:08:11.523037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.823 [2024-07-24 20:08:11.523056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.823 [2024-07-24 20:08:11.534374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.823 [2024-07-24 20:08:11.534647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.823 [2024-07-24 20:08:11.534664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.823 [2024-07-24 20:08:11.545706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.823 [2024-07-24 20:08:11.546319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.823 [2024-07-24 20:08:11.546336] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.823 [2024-07-24 20:08:11.557880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.823 [2024-07-24 20:08:11.558191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.823 [2024-07-24 20:08:11.558212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.823 [2024-07-24 20:08:11.570136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.823 [2024-07-24 20:08:11.570685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.823 [2024-07-24 20:08:11.570701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.823 [2024-07-24 20:08:11.582498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.823 [2024-07-24 20:08:11.582842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.823 [2024-07-24 20:08:11.582858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.823 [2024-07-24 20:08:11.593743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.594135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 
20:08:11.594151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.605198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.605579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.605595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.617735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.618221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.618237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.630293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.630849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.630866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.642923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.643207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.643223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.654594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.654977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.654994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.666255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.666598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.666615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.678120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.678580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.678597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.689711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.690148] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.690164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.701899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.702325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.702341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.712986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.713353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.713370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.724713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 20:08:11.725114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.824 [2024-07-24 20:08:11.725131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.824 [2024-07-24 20:08:11.735719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90 00:28:23.824 [2024-07-24 
20:08:11.736019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.824 [2024-07-24 20:08:11.736036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.824 [2024-07-24 20:08:11.746636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.824 [2024-07-24 20:08:11.747062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.824 [2024-07-24 20:08:11.747079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.824 [2024-07-24 20:08:11.757622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.824 [2024-07-24 20:08:11.757984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.824 [2024-07-24 20:08:11.758001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.824 [2024-07-24 20:08:11.768130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1754400) with pdu=0x2000190fef90
00:28:23.824 [2024-07-24 20:08:11.768290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.824 [2024-07-24 20:08:11.768305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.824
00:28:23.824 Latency(us)
00:28:23.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.824 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:23.824 nvme0n1 : 2.01 2527.78 315.97 0.00 0.00 6318.99 4369.07 19551.57
00:28:23.824 ===================================================================================================================
00:28:23.824 Total : 2527.78 315.97 0.00 0.00 6318.99 4369.07 19551.57
00:28:24.085 0
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:24.085 | .driver_specific
00:28:24.085 | .nvme_error
00:28:24.085 | .status_code
00:28:24.085 | .command_transient_transport_error'
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 ))
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3849028
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3849028 ']'
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3849028
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:24.085 20:08:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3849028
20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:24.085 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:24.085 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3849028'
00:28:24.085 killing process with pid 3849028
00:28:24.085 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3849028
00:28:24.085 Received shutdown signal, test time was about 2.000000 seconds
00:28:24.085
00:28:24.085 Latency(us)
00:28:24.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.085 ===================================================================================================================
00:28:24.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:24.085 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3849028
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3846630
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3846630 ']'
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3846630
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3846630
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:24.346 20:08:12
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3846630'
00:28:24.346 killing process with pid 3846630
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3846630
00:28:24.346 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3846630
00:28:24.607
00:28:24.607 real 0m16.404s
00:28:24.607 user 0m32.285s
00:28:24.607 sys 0m3.228s
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:24.607 ************************************
00:28:24.607 END TEST nvmf_digest_error
00:28:24.607 ************************************
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3846630 ']'
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3846630
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3846630 ']'
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3846630
00:28:24.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3846630) - No such process
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3846630 is not found'
00:28:24.607 Process with pid 3846630 is not found
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:24.607 20:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:27.152
00:28:27.152 real 0m42.494s
00:28:27.152 user 1m6.614s
00:28:27.152 sys 0m11.941s
00:28:27.152
20:08:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:27.152 ************************************
00:28:27.152 END TEST nvmf_digest
00:28:27.152 ************************************
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.152 ************************************
00:28:27.152 START TEST nvmf_bdevperf
00:28:27.152 ************************************
00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:27.152 * Looking for test storage...
00:28:27.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:27.152 20:08:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.738 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.738 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.738 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.738 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.738 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.738 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.739 20:08:21 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:33.739 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:33.739 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:33.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:33.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:33.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:33.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:28:33.739 00:28:33.739 --- 10.0.0.2 ping statistics --- 00:28:33.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.739 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:28:33.739 00:28:33.739 --- 10.0.0.1 ping statistics --- 00:28:33.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.739 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:33.739 
20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3853958 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3853958 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3853958 ']' 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:33.739 20:08:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.739 [2024-07-24 20:08:21.611813] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:28:33.739 [2024-07-24 20:08:21.611886] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.739 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.000 [2024-07-24 20:08:21.699307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:34.000 [2024-07-24 20:08:21.793234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.000 [2024-07-24 20:08:21.793297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.000 [2024-07-24 20:08:21.793305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.000 [2024-07-24 20:08:21.793312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.000 [2024-07-24 20:08:21.793318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:34.000 [2024-07-24 20:08:21.793460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.000 [2024-07-24 20:08:21.793626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.000 [2024-07-24 20:08:21.793627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.572 [2024-07-24 20:08:22.438197] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.572 Malloc0 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.572 [2024-07-24 20:08:22.509313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:34.572 
20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:34.572 { 00:28:34.572 "params": { 00:28:34.572 "name": "Nvme$subsystem", 00:28:34.572 "trtype": "$TEST_TRANSPORT", 00:28:34.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.572 "adrfam": "ipv4", 00:28:34.572 "trsvcid": "$NVMF_PORT", 00:28:34.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.572 "hdgst": ${hdgst:-false}, 00:28:34.572 "ddgst": ${ddgst:-false} 00:28:34.572 }, 00:28:34.572 "method": "bdev_nvme_attach_controller" 00:28:34.572 } 00:28:34.572 EOF 00:28:34.572 )") 00:28:34.572 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:34.833 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:34.833 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:34.833 20:08:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:34.833 "params": { 00:28:34.833 "name": "Nvme1", 00:28:34.833 "trtype": "tcp", 00:28:34.833 "traddr": "10.0.0.2", 00:28:34.833 "adrfam": "ipv4", 00:28:34.833 "trsvcid": "4420", 00:28:34.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.833 "hdgst": false, 00:28:34.833 "ddgst": false 00:28:34.833 }, 00:28:34.833 "method": "bdev_nvme_attach_controller" 00:28:34.833 }' 00:28:34.833 [2024-07-24 20:08:22.572750] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:28:34.833 [2024-07-24 20:08:22.572821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854070 ] 00:28:34.833 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.833 [2024-07-24 20:08:22.631841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.833 [2024-07-24 20:08:22.696165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.093 Running I/O for 1 seconds... 00:28:36.034 00:28:36.034 Latency(us) 00:28:36.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.034 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:36.034 Verification LBA range: start 0x0 length 0x4000 00:28:36.034 Nvme1n1 : 1.01 9759.67 38.12 0.00 0.00 13049.38 1542.83 20643.84 00:28:36.034 =================================================================================================================== 00:28:36.034 Total : 9759.67 38.12 0.00 0.00 13049.38 1542.83 20643.84 00:28:36.034 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3854405 00:28:36.034 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:36.034 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:36.034 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:36.034 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:36.034 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:36.034 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.035 20:08:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.035 { 00:28:36.035 "params": { 00:28:36.035 "name": "Nvme$subsystem", 00:28:36.035 "trtype": "$TEST_TRANSPORT", 00:28:36.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.035 "adrfam": "ipv4", 00:28:36.035 "trsvcid": "$NVMF_PORT", 00:28:36.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.035 "hdgst": ${hdgst:-false}, 00:28:36.035 "ddgst": ${ddgst:-false} 00:28:36.035 }, 00:28:36.035 "method": "bdev_nvme_attach_controller" 00:28:36.035 } 00:28:36.035 EOF 00:28:36.035 )") 00:28:36.295 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:36.295 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:36.295 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:36.295 20:08:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:36.295 "params": { 00:28:36.295 "name": "Nvme1", 00:28:36.295 "trtype": "tcp", 00:28:36.295 "traddr": "10.0.0.2", 00:28:36.295 "adrfam": "ipv4", 00:28:36.295 "trsvcid": "4420", 00:28:36.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.295 "hdgst": false, 00:28:36.295 "ddgst": false 00:28:36.295 }, 00:28:36.295 "method": "bdev_nvme_attach_controller" 00:28:36.295 }' 00:28:36.295 [2024-07-24 20:08:24.032997] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:28:36.295 [2024-07-24 20:08:24.033052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854405 ] 00:28:36.295 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.295 [2024-07-24 20:08:24.091751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.295 [2024-07-24 20:08:24.155941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.556 Running I/O for 15 seconds... 00:28:39.105 20:08:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3853958 00:28:39.105 20:08:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:39.105 [2024-07-24 20:08:26.997389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78944 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.105 [2024-07-24 20:08:26.997662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.105 [2024-07-24 20:08:26.997673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:39.106 [2024-07-24 20:08:26.997855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.997986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.997993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 
[2024-07-24 20:08:26.998140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998238] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.106 [2024-07-24 20:08:26.998368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.106 [2024-07-24 20:08:26.998375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998426] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.107 [2024-07-24 20:08:26.998508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.107 [2024-07-24 20:08:26.998524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.107 [2024-07-24 20:08:26.998540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.107 [2024-07-24 20:08:26.998557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.107 [2024-07-24 20:08:26.998573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.107 [2024-07-24 20:08:26.998589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.107 [2024-07-24 20:08:26.998609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:39.107 [2024-07-24 20:08:26.998618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 
[2024-07-24 20:08:26.998897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.107 [2024-07-24 20:08:26.998904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.107 [2024-07-24 20:08:26.998913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.108 [2024-07-24 20:08:26.998921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.998930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.108 [2024-07-24 20:08:26.998937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.998946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.108 [2024-07-24 20:08:26.998953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.998963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.998970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.998979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.998986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.998995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 
[2024-07-24 20:08:26.999179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.108 [2024-07-24 20:08:26.999380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.108 [2024-07-24 20:08:26.999396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.108 [2024-07-24 20:08:26.999445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.108 [2024-07-24 20:08:26.999456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 
[2024-07-24 20:08:26.999473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.109 [2024-07-24 20:08:26.999628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89570 is same with the state(5) to be set 00:28:39.109 [2024-07-24 20:08:26.999645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:39.109 [2024-07-24 20:08:26.999651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:39.109 [2024-07-24 20:08:26.999657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:28:39.109 [2024-07-24 20:08:26.999668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.109 [2024-07-24 20:08:26.999706] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c89570 was disconnected and freed. reset controller. 00:28:39.109 [2024-07-24 20:08:27.004003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.109 [2024-07-24 20:08:27.004055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.109 [2024-07-24 20:08:27.004927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.109 [2024-07-24 20:08:27.004944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.109 [2024-07-24 20:08:27.004953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.109 [2024-07-24 20:08:27.005170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.109 [2024-07-24 20:08:27.005393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.109 [2024-07-24 20:08:27.005403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.109 [2024-07-24 20:08:27.005411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.109 [2024-07-24 20:08:27.008908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.109 [2024-07-24 20:08:27.018189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.109 [2024-07-24 20:08:27.018916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.109 [2024-07-24 20:08:27.018955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.109 [2024-07-24 20:08:27.018965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.109 [2024-07-24 20:08:27.019213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.109 [2024-07-24 20:08:27.019434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.109 [2024-07-24 20:08:27.019443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.109 [2024-07-24 20:08:27.019451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.109 [2024-07-24 20:08:27.022953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.109 [2024-07-24 20:08:27.032025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.109 [2024-07-24 20:08:27.032801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.109 [2024-07-24 20:08:27.032839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.109 [2024-07-24 20:08:27.032850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.109 [2024-07-24 20:08:27.033086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.109 [2024-07-24 20:08:27.033314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.109 [2024-07-24 20:08:27.033324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.109 [2024-07-24 20:08:27.033332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.109 [2024-07-24 20:08:27.036834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.109 [2024-07-24 20:08:27.045909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.109 [2024-07-24 20:08:27.046646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.109 [2024-07-24 20:08:27.046684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.109 [2024-07-24 20:08:27.046694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.109 [2024-07-24 20:08:27.046930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.109 [2024-07-24 20:08:27.047151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.109 [2024-07-24 20:08:27.047160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.109 [2024-07-24 20:08:27.047168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.109 [2024-07-24 20:08:27.050674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.059748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.060544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.060581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.060592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.060828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.061048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.061058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.061065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.064570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.073638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.074309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.074347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.074359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.074596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.074816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.074825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.074833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.078340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.087405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.088187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.088232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.088249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.088486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.088706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.088716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.088723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.092220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.101286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.102046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.102084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.102095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.102338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.102558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.102568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.102575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.106076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.115142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.115912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.115950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.115961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.116197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.116427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.116436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.116444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.119942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.129007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.129778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.129816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.129827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.130062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.130291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.130305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.130313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.133814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.142894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.143641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.143679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.143691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.143928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.144148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.144158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.144165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.147669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.156739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.157502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.157540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.157550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.157786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.158006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.158016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.158023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.161529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.170588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.171297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.171335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.171347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.171584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.171805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.171814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.171822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.175330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.184403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.185167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.185211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.185222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.185458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.185678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.185687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.185695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.189192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.198256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.198961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.198998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.199009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.199252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.199473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.199482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.199490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.202985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.212056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.212805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.212844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.212854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.213090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.213319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.213329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.213336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.216834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.225896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.226634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.372 [2024-07-24 20:08:27.226672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.372 [2024-07-24 20:08:27.226682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.372 [2024-07-24 20:08:27.226923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.372 [2024-07-24 20:08:27.227143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.372 [2024-07-24 20:08:27.227152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.372 [2024-07-24 20:08:27.227159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.372 [2024-07-24 20:08:27.230664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.372 [2024-07-24 20:08:27.239741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.372 [2024-07-24 20:08:27.240483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.373 [2024-07-24 20:08:27.240521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.373 [2024-07-24 20:08:27.240532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.373 [2024-07-24 20:08:27.240767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.373 [2024-07-24 20:08:27.240988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.373 [2024-07-24 20:08:27.240997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.373 [2024-07-24 20:08:27.241004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.373 [2024-07-24 20:08:27.244515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.373 [2024-07-24 20:08:27.253597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.373 [2024-07-24 20:08:27.254302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.373 [2024-07-24 20:08:27.254341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.373 [2024-07-24 20:08:27.254352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.373 [2024-07-24 20:08:27.254592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.373 [2024-07-24 20:08:27.254812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.373 [2024-07-24 20:08:27.254821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.373 [2024-07-24 20:08:27.254829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.373 [2024-07-24 20:08:27.258338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.373 [2024-07-24 20:08:27.267412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.373 [2024-07-24 20:08:27.268159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.373 [2024-07-24 20:08:27.268197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.373 [2024-07-24 20:08:27.268218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.373 [2024-07-24 20:08:27.268455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.373 [2024-07-24 20:08:27.268675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.373 [2024-07-24 20:08:27.268685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.373 [2024-07-24 20:08:27.268696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.373 [2024-07-24 20:08:27.272192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.373 [2024-07-24 20:08:27.281347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.373 [2024-07-24 20:08:27.282100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.373 [2024-07-24 20:08:27.282137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.373 [2024-07-24 20:08:27.282148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.373 [2024-07-24 20:08:27.282391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.373 [2024-07-24 20:08:27.282613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.373 [2024-07-24 20:08:27.282622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.373 [2024-07-24 20:08:27.282630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.373 [2024-07-24 20:08:27.286127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.373 [2024-07-24 20:08:27.295195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.373 [2024-07-24 20:08:27.295943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.373 [2024-07-24 20:08:27.295981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.373 [2024-07-24 20:08:27.295992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.373 [2024-07-24 20:08:27.296237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.373 [2024-07-24 20:08:27.296459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.373 [2024-07-24 20:08:27.296468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.373 [2024-07-24 20:08:27.296476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.373 [2024-07-24 20:08:27.299974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.373 [2024-07-24 20:08:27.309040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.373 [2024-07-24 20:08:27.309821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.373 [2024-07-24 20:08:27.309858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.373 [2024-07-24 20:08:27.309869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.373 [2024-07-24 20:08:27.310105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.373 [2024-07-24 20:08:27.310335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.373 [2024-07-24 20:08:27.310346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.373 [2024-07-24 20:08:27.310353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.373 [2024-07-24 20:08:27.313851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.373 [2024-07-24 20:08:27.322924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.373 [2024-07-24 20:08:27.323716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.373 [2024-07-24 20:08:27.323754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.373 [2024-07-24 20:08:27.323765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.634 [2024-07-24 20:08:27.324000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.634 [2024-07-24 20:08:27.324231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.634 [2024-07-24 20:08:27.324243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.324251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.327748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.336809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.337518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.337555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.337566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.337802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.338022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.338032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.338039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.341556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.350629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.351300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.351338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.351348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.351584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.351805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.351814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.351821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.355337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.364399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.365173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.365217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.365228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.365468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.365688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.365698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.365705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.369209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.378266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.379041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.379078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.379089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.379334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.379555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.379564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.379572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.383069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.392134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.392929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.392967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.392978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.393223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.393444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.393454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.393461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.396960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.406024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.406808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.406846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.406857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.407093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.407331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.407343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.407355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.410856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.419932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.420696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.420734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.420745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.420982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.421210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.421220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.421228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.424723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.433779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.434375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.434412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.434423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.434660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.434880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.434889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.434897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.438409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.447680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.448414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.448451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.448462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.448698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.448918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.448927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.448935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.635 [2024-07-24 20:08:27.452442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.635 [2024-07-24 20:08:27.461515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.635 [2024-07-24 20:08:27.462157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.635 [2024-07-24 20:08:27.462180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.635 [2024-07-24 20:08:27.462188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.635 [2024-07-24 20:08:27.462412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.635 [2024-07-24 20:08:27.462629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.635 [2024-07-24 20:08:27.462639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.635 [2024-07-24 20:08:27.462646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.466136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.475402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.476078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.476094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.476101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.476323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.476540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.476548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.476555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.480044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.489308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.490055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.490092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.490103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.490349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.490571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.490581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.490588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.494084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.503147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.503926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.503964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.503974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.504221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.504451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.504460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.504468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.507971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.517057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.517764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.517784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.517792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.518008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.518230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.518239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.518247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.521742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.530809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.531584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.531622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.531632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.531868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.532088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.532098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.532105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.535613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.544699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.545481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.545519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.545530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.545765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.545985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.545994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.546002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.549522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.558604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.559368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.559406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.559416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.559652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.559871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.559881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.559889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.563395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.572455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.636 [2024-07-24 20:08:27.573239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.636 [2024-07-24 20:08:27.573278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.636 [2024-07-24 20:08:27.573290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.636 [2024-07-24 20:08:27.573529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.636 [2024-07-24 20:08:27.573748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.636 [2024-07-24 20:08:27.573758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.636 [2024-07-24 20:08:27.573766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.636 [2024-07-24 20:08:27.577272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.636 [2024-07-24 20:08:27.586339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.899 [2024-07-24 20:08:27.587116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.899 [2024-07-24 20:08:27.587154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.899 [2024-07-24 20:08:27.587166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.899 [2024-07-24 20:08:27.587413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.899 [2024-07-24 20:08:27.587635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.899 [2024-07-24 20:08:27.587644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.899 [2024-07-24 20:08:27.587652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.899 [2024-07-24 20:08:27.591150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.899 [2024-07-24 20:08:27.600217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.899 [2024-07-24 20:08:27.601001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.899 [2024-07-24 20:08:27.601038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:39.899 [2024-07-24 20:08:27.601054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:39.899 [2024-07-24 20:08:27.601299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:39.899 [2024-07-24 20:08:27.601521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.899 [2024-07-24 20:08:27.601530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.899 [2024-07-24 20:08:27.601538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.899 [2024-07-24 20:08:27.605033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the identical reset/reconnect cycle repeats 26 more times between 20:08:27.614 and 20:08:27.965, each iteration failing the same way: connect() errno = 111 against 10.0.0.2:4420, then "Resetting controller failed." ...]
00:28:40.164 [2024-07-24 20:08:27.974360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.164 [2024-07-24 20:08:27.975038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.164 [2024-07-24 20:08:27.975054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.164 [2024-07-24 20:08:27.975067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.164 [2024-07-24 20:08:27.975289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.164 [2024-07-24 20:08:27.975506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.164 [2024-07-24 20:08:27.975514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.164 [2024-07-24 20:08:27.975521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.164 [2024-07-24 20:08:27.979009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.164 [2024-07-24 20:08:27.988272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.164 [2024-07-24 20:08:27.988944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.164 [2024-07-24 20:08:27.988959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.164 [2024-07-24 20:08:27.988967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.164 [2024-07-24 20:08:27.989182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.164 [2024-07-24 20:08:27.989404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.164 [2024-07-24 20:08:27.989415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.164 [2024-07-24 20:08:27.989421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.164 [2024-07-24 20:08:27.992915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.164 [2024-07-24 20:08:28.002189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.164 [2024-07-24 20:08:28.002871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.164 [2024-07-24 20:08:28.002888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.164 [2024-07-24 20:08:28.002896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.164 [2024-07-24 20:08:28.003111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.164 [2024-07-24 20:08:28.003333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.164 [2024-07-24 20:08:28.003343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.164 [2024-07-24 20:08:28.003350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.164 [2024-07-24 20:08:28.006845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.164 [2024-07-24 20:08:28.016114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.164 [2024-07-24 20:08:28.016773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.164 [2024-07-24 20:08:28.016789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.164 [2024-07-24 20:08:28.016797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.164 [2024-07-24 20:08:28.017012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.164 [2024-07-24 20:08:28.017237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.164 [2024-07-24 20:08:28.017246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.164 [2024-07-24 20:08:28.017253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.164 [2024-07-24 20:08:28.020749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.164 [2024-07-24 20:08:28.030021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.164 [2024-07-24 20:08:28.030716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.164 [2024-07-24 20:08:28.030732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.164 [2024-07-24 20:08:28.030740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.164 [2024-07-24 20:08:28.030955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.164 [2024-07-24 20:08:28.031172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.164 [2024-07-24 20:08:28.031180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.164 [2024-07-24 20:08:28.031187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.164 [2024-07-24 20:08:28.034685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.164 [2024-07-24 20:08:28.043844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.164 [2024-07-24 20:08:28.044606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.164 [2024-07-24 20:08:28.044644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.164 [2024-07-24 20:08:28.044655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.164 [2024-07-24 20:08:28.044892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.164 [2024-07-24 20:08:28.045113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.164 [2024-07-24 20:08:28.045122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.164 [2024-07-24 20:08:28.045130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.164 [2024-07-24 20:08:28.048669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.164 [2024-07-24 20:08:28.057755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.164 [2024-07-24 20:08:28.058468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.165 [2024-07-24 20:08:28.058507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.165 [2024-07-24 20:08:28.058519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.165 [2024-07-24 20:08:28.058755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.165 [2024-07-24 20:08:28.058975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.165 [2024-07-24 20:08:28.058984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.165 [2024-07-24 20:08:28.058992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.165 [2024-07-24 20:08:28.062506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.165 [2024-07-24 20:08:28.071596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.165 [2024-07-24 20:08:28.072387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.165 [2024-07-24 20:08:28.072425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.165 [2024-07-24 20:08:28.072437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.165 [2024-07-24 20:08:28.072674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.165 [2024-07-24 20:08:28.072895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.165 [2024-07-24 20:08:28.072904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.165 [2024-07-24 20:08:28.072912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.165 [2024-07-24 20:08:28.076427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.165 [2024-07-24 20:08:28.085505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.165 [2024-07-24 20:08:28.086198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.165 [2024-07-24 20:08:28.086223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.165 [2024-07-24 20:08:28.086231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.165 [2024-07-24 20:08:28.086448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.165 [2024-07-24 20:08:28.086665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.165 [2024-07-24 20:08:28.086673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.165 [2024-07-24 20:08:28.086681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.165 [2024-07-24 20:08:28.090194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.165 [2024-07-24 20:08:28.099278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.165 [2024-07-24 20:08:28.099951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.165 [2024-07-24 20:08:28.099967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.165 [2024-07-24 20:08:28.099975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.165 [2024-07-24 20:08:28.100191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.165 [2024-07-24 20:08:28.100414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.165 [2024-07-24 20:08:28.100424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.165 [2024-07-24 20:08:28.100431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.165 [2024-07-24 20:08:28.103923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.165 [2024-07-24 20:08:28.113210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.165 [2024-07-24 20:08:28.113954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.165 [2024-07-24 20:08:28.113991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.165 [2024-07-24 20:08:28.114007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.165 [2024-07-24 20:08:28.114252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.165 [2024-07-24 20:08:28.114473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.165 [2024-07-24 20:08:28.114483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.165 [2024-07-24 20:08:28.114490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.427 [2024-07-24 20:08:28.117992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.427 [2024-07-24 20:08:28.127071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.427 [2024-07-24 20:08:28.127743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.427 [2024-07-24 20:08:28.127763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.427 [2024-07-24 20:08:28.127771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.427 [2024-07-24 20:08:28.127988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.427 [2024-07-24 20:08:28.128212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.427 [2024-07-24 20:08:28.128221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.427 [2024-07-24 20:08:28.128228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.427 [2024-07-24 20:08:28.131722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.427 [2024-07-24 20:08:28.141015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.427 [2024-07-24 20:08:28.141677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.427 [2024-07-24 20:08:28.141693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.427 [2024-07-24 20:08:28.141701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.427 [2024-07-24 20:08:28.141917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.427 [2024-07-24 20:08:28.142134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.427 [2024-07-24 20:08:28.142142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.427 [2024-07-24 20:08:28.142149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.427 [2024-07-24 20:08:28.145646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.427 [2024-07-24 20:08:28.154942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.427 [2024-07-24 20:08:28.155601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.427 [2024-07-24 20:08:28.155619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.427 [2024-07-24 20:08:28.155627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.427 [2024-07-24 20:08:28.155843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.427 [2024-07-24 20:08:28.156059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.427 [2024-07-24 20:08:28.156072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.427 [2024-07-24 20:08:28.156079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.427 [2024-07-24 20:08:28.159589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.427 [2024-07-24 20:08:28.168867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.427 [2024-07-24 20:08:28.169521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.427 [2024-07-24 20:08:28.169538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.427 [2024-07-24 20:08:28.169546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.427 [2024-07-24 20:08:28.169761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.427 [2024-07-24 20:08:28.169978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.427 [2024-07-24 20:08:28.169987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.428 [2024-07-24 20:08:28.169994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.428 [2024-07-24 20:08:28.173495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.428 [2024-07-24 20:08:28.182777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.428 [2024-07-24 20:08:28.183545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.428 [2024-07-24 20:08:28.183583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.428 [2024-07-24 20:08:28.183595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.428 [2024-07-24 20:08:28.183832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.428 [2024-07-24 20:08:28.184052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.428 [2024-07-24 20:08:28.184061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.428 [2024-07-24 20:08:28.184069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.428 [2024-07-24 20:08:28.187572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.428 [2024-07-24 20:08:28.196633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.428 [2024-07-24 20:08:28.197486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.428 [2024-07-24 20:08:28.197524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.428 [2024-07-24 20:08:28.197535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.428 [2024-07-24 20:08:28.197771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.428 [2024-07-24 20:08:28.197991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.428 [2024-07-24 20:08:28.198001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.428 [2024-07-24 20:08:28.198009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.428 [2024-07-24 20:08:28.201508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.428 [2024-07-24 20:08:28.210583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.428 [2024-07-24 20:08:28.211311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.428 [2024-07-24 20:08:28.211350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.428 [2024-07-24 20:08:28.211362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.428 [2024-07-24 20:08:28.211601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.428 [2024-07-24 20:08:28.211821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.428 [2024-07-24 20:08:28.211831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.428 [2024-07-24 20:08:28.211839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.428 [2024-07-24 20:08:28.215337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.428 [2024-07-24 20:08:28.224405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.428 [2024-07-24 20:08:28.225073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.428 [2024-07-24 20:08:28.225093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.428 [2024-07-24 20:08:28.225101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.428 [2024-07-24 20:08:28.225325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.428 [2024-07-24 20:08:28.225543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.428 [2024-07-24 20:08:28.225552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.428 [2024-07-24 20:08:28.225559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.428 [2024-07-24 20:08:28.229055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.428 [2024-07-24 20:08:28.238342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.428 [2024-07-24 20:08:28.238979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.428 [2024-07-24 20:08:28.238996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.428 [2024-07-24 20:08:28.239004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.428 [2024-07-24 20:08:28.239254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.428 [2024-07-24 20:08:28.239472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.428 [2024-07-24 20:08:28.239482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.428 [2024-07-24 20:08:28.239489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.428 [2024-07-24 20:08:28.242983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.428 [2024-07-24 20:08:28.252259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.428 [2024-07-24 20:08:28.252948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.428 [2024-07-24 20:08:28.252985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.428 [2024-07-24 20:08:28.252996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.428 [2024-07-24 20:08:28.253245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.428 [2024-07-24 20:08:28.253467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.428 [2024-07-24 20:08:28.253476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.428 [2024-07-24 20:08:28.253484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.428 [2024-07-24 20:08:28.256990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.428 [2024-07-24 20:08:28.266071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.428 [2024-07-24 20:08:28.266699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.428 [2024-07-24 20:08:28.266719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.428 [2024-07-24 20:08:28.266726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.428 [2024-07-24 20:08:28.266943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.428 [2024-07-24 20:08:28.267159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.428 [2024-07-24 20:08:28.267168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.428 [2024-07-24 20:08:28.267175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.428 [2024-07-24 20:08:28.270676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.428 [2024-07-24 20:08:28.279957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.428 [2024-07-24 20:08:28.280595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.428 [2024-07-24 20:08:28.280634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.428 [2024-07-24 20:08:28.280644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.428 [2024-07-24 20:08:28.280880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.428 [2024-07-24 20:08:28.281100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.428 [2024-07-24 20:08:28.281110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.428 [2024-07-24 20:08:28.281117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.428 [2024-07-24 20:08:28.284630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.428 [2024-07-24 20:08:28.293716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.428 [2024-07-24 20:08:28.294499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.428 [2024-07-24 20:08:28.294537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.428 [2024-07-24 20:08:28.294548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.428 [2024-07-24 20:08:28.294784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.428 [2024-07-24 20:08:28.295005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.428 [2024-07-24 20:08:28.295014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.428 [2024-07-24 20:08:28.295027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.428 [2024-07-24 20:08:28.298538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.428 [2024-07-24 20:08:28.307654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.428 [2024-07-24 20:08:28.308505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.428 [2024-07-24 20:08:28.308543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.428 [2024-07-24 20:08:28.308553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.428 [2024-07-24 20:08:28.308788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.428 [2024-07-24 20:08:28.309008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.428 [2024-07-24 20:08:28.309018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.428 [2024-07-24 20:08:28.309026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.429 [2024-07-24 20:08:28.312531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.429 [2024-07-24 20:08:28.321596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.429 [2024-07-24 20:08:28.322401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.429 [2024-07-24 20:08:28.322440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.429 [2024-07-24 20:08:28.322450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.429 [2024-07-24 20:08:28.322686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.429 [2024-07-24 20:08:28.322906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.429 [2024-07-24 20:08:28.322915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.429 [2024-07-24 20:08:28.322923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.429 [2024-07-24 20:08:28.326431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.429 [2024-07-24 20:08:28.335506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.429 [2024-07-24 20:08:28.336287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.429 [2024-07-24 20:08:28.336325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.429 [2024-07-24 20:08:28.336337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.429 [2024-07-24 20:08:28.336574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.429 [2024-07-24 20:08:28.336795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.429 [2024-07-24 20:08:28.336805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.429 [2024-07-24 20:08:28.336812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.429 [2024-07-24 20:08:28.340330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.429 [2024-07-24 20:08:28.349395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.429 [2024-07-24 20:08:28.349938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.429 [2024-07-24 20:08:28.349962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.429 [2024-07-24 20:08:28.349971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.429 [2024-07-24 20:08:28.350187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.429 [2024-07-24 20:08:28.350410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.429 [2024-07-24 20:08:28.350419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.429 [2024-07-24 20:08:28.350426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.429 [2024-07-24 20:08:28.353917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.429 [2024-07-24 20:08:28.363185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.429 [2024-07-24 20:08:28.363783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.429 [2024-07-24 20:08:28.363800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.429 [2024-07-24 20:08:28.363807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.429 [2024-07-24 20:08:28.364023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.429 [2024-07-24 20:08:28.364243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.429 [2024-07-24 20:08:28.364253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.429 [2024-07-24 20:08:28.364260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.429 [2024-07-24 20:08:28.367749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.429 [2024-07-24 20:08:28.377055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.429 [2024-07-24 20:08:28.377792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.429 [2024-07-24 20:08:28.377830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.429 [2024-07-24 20:08:28.377841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.429 [2024-07-24 20:08:28.378076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.429 [2024-07-24 20:08:28.378304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.429 [2024-07-24 20:08:28.378314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.429 [2024-07-24 20:08:28.378321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.381818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.390890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.391454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.391474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.391482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.391699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.391920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.391929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.391936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.395430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.404695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.405487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.405525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.405537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.405773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.405994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.406003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.406010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.409513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.418577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.419397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.419435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.419446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.419681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.419901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.419912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.419920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.423424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.432526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.433173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.433191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.433199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.433422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.433638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.433647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.433654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.437149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.446429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.447219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.447257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.447268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.447504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.447725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.447734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.447742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.451251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.460327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.460977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.460996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.461004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.461227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.461444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.461456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.461464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.464958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.474220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.474776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.474792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.474800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.475016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.475237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.475246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.475253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.478744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.488003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.488768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.488806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.488821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.489057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.489283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.489293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.489301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.492796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.501870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.502526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.502564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.502575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.502811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.503032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.503042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.503049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.506555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.691 [2024-07-24 20:08:28.515619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.691 [2024-07-24 20:08:28.516341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.691 [2024-07-24 20:08:28.516379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.691 [2024-07-24 20:08:28.516391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.691 [2024-07-24 20:08:28.516631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.691 [2024-07-24 20:08:28.516852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.691 [2024-07-24 20:08:28.516861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.691 [2024-07-24 20:08:28.516869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.691 [2024-07-24 20:08:28.520373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.529443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.530141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.530160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.530168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.530391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.530609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.530623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.530630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.534122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.543195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.543926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.543964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.543974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.544219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.544439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.544449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.544457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.547953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.557024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.557656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.557675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.557683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.557899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.558123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.558133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.558140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.561635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.570896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.571491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.571528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.571540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.571776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.571996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.572006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.572013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.575518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.584799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.585512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.585550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.585560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.585796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.586017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.586026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.586034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.589539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.598613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.599077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.599096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.599104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.599325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.599543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.599552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.599559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.603052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.612522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.613115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.613131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.613139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.613359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.613576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.613584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.613592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.617080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.626342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.627123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.627161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.627177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.627422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.627644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.627653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.627660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.692 [2024-07-24 20:08:28.631158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.692 [2024-07-24 20:08:28.640233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:40.692 [2024-07-24 20:08:28.640886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.692 [2024-07-24 20:08:28.640905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:40.692 [2024-07-24 20:08:28.640913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:40.692 [2024-07-24 20:08:28.641130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:40.692 [2024-07-24 20:08:28.641351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:40.692 [2024-07-24 20:08:28.641361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:40.692 [2024-07-24 20:08:28.641368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:40.954 [2024-07-24 20:08:28.644860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:40.954 [2024-07-24 20:08:28.654127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.954 [2024-07-24 20:08:28.654761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.954 [2024-07-24 20:08:28.654778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.954 [2024-07-24 20:08:28.654785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.954 [2024-07-24 20:08:28.655001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.954 [2024-07-24 20:08:28.655222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.954 [2024-07-24 20:08:28.655231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.954 [2024-07-24 20:08:28.655238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.954 [2024-07-24 20:08:28.658731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.954 [2024-07-24 20:08:28.667998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.954 [2024-07-24 20:08:28.668629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.954 [2024-07-24 20:08:28.668645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.954 [2024-07-24 20:08:28.668653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.954 [2024-07-24 20:08:28.668869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.954 [2024-07-24 20:08:28.669085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.954 [2024-07-24 20:08:28.669093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.954 [2024-07-24 20:08:28.669104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.954 [2024-07-24 20:08:28.672602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.954 [2024-07-24 20:08:28.681859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.954 [2024-07-24 20:08:28.682508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.954 [2024-07-24 20:08:28.682524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.954 [2024-07-24 20:08:28.682532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.954 [2024-07-24 20:08:28.682748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.954 [2024-07-24 20:08:28.682964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.954 [2024-07-24 20:08:28.682972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.954 [2024-07-24 20:08:28.682980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.954 [2024-07-24 20:08:28.686474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.954 [2024-07-24 20:08:28.695734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.954 [2024-07-24 20:08:28.696463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.954 [2024-07-24 20:08:28.696501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.954 [2024-07-24 20:08:28.696513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.954 [2024-07-24 20:08:28.696750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.954 [2024-07-24 20:08:28.696970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.954 [2024-07-24 20:08:28.696980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.954 [2024-07-24 20:08:28.696987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.954 [2024-07-24 20:08:28.700491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.709551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.710198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.710222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.710230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.710447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.710664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.710673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.710680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.714169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.723435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.723954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.723970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.723977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.724193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.724415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.724424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.724431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.727920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.737184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.737859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.737874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.737882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.738098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.738317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.738327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.738334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.741833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.751100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.751847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.751885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.751895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.752131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.752358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.752369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.752376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.755871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.764949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.765692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.765730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.765741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.765981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.766210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.766220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.766227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.769723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.778782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.779527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.779566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.779576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.779812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.780032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.780042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.780049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.783551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.792609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.793421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.793459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.793470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.793705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.793925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.793934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.793942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.797447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.806511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.807297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.807335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.807346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.807581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.807801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.807811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.807823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.811330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.820274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.821013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.821051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.821061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.821306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.821527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.821537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.821544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.825040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.834111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.834854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.955 [2024-07-24 20:08:28.834892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.955 [2024-07-24 20:08:28.834903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.955 [2024-07-24 20:08:28.835139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.955 [2024-07-24 20:08:28.835366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.955 [2024-07-24 20:08:28.835376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.955 [2024-07-24 20:08:28.835384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.955 [2024-07-24 20:08:28.838883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.955 [2024-07-24 20:08:28.847954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.955 [2024-07-24 20:08:28.848681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.956 [2024-07-24 20:08:28.848719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.956 [2024-07-24 20:08:28.848730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.956 [2024-07-24 20:08:28.848966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.956 [2024-07-24 20:08:28.849186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.956 [2024-07-24 20:08:28.849195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.956 [2024-07-24 20:08:28.849212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.956 [2024-07-24 20:08:28.852708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.956 [2024-07-24 20:08:28.861777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.956 [2024-07-24 20:08:28.862556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.956 [2024-07-24 20:08:28.862598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.956 [2024-07-24 20:08:28.862609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.956 [2024-07-24 20:08:28.862844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.956 [2024-07-24 20:08:28.863064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.956 [2024-07-24 20:08:28.863074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.956 [2024-07-24 20:08:28.863081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.956 [2024-07-24 20:08:28.866587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.956 [2024-07-24 20:08:28.875652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.956 [2024-07-24 20:08:28.876500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.956 [2024-07-24 20:08:28.876538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.956 [2024-07-24 20:08:28.876548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.956 [2024-07-24 20:08:28.876784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.956 [2024-07-24 20:08:28.877004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.956 [2024-07-24 20:08:28.877014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.956 [2024-07-24 20:08:28.877021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.956 [2024-07-24 20:08:28.880522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.956 [2024-07-24 20:08:28.889583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.956 [2024-07-24 20:08:28.890280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.956 [2024-07-24 20:08:28.890319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.956 [2024-07-24 20:08:28.890331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.956 [2024-07-24 20:08:28.890567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.956 [2024-07-24 20:08:28.890788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.956 [2024-07-24 20:08:28.890797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.956 [2024-07-24 20:08:28.890804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.956 [2024-07-24 20:08:28.894308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.956 [2024-07-24 20:08:28.903371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.956 [2024-07-24 20:08:28.904107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.956 [2024-07-24 20:08:28.904145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:40.956 [2024-07-24 20:08:28.904155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:40.956 [2024-07-24 20:08:28.904404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:40.956 [2024-07-24 20:08:28.904633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.956 [2024-07-24 20:08:28.904643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.956 [2024-07-24 20:08:28.904651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.219 [2024-07-24 20:08:28.908148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.219 [2024-07-24 20:08:28.917216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.219 [2024-07-24 20:08:28.917999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.219 [2024-07-24 20:08:28.918037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.219 [2024-07-24 20:08:28.918047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.219 [2024-07-24 20:08:28.918292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.219 [2024-07-24 20:08:28.918513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.219 [2024-07-24 20:08:28.918522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.219 [2024-07-24 20:08:28.918530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.219 [2024-07-24 20:08:28.922026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.219 [2024-07-24 20:08:28.931084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:28.931849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:28.931887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:28.931898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:28.932133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:28.932362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:28.932373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:28.932380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:28.935879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:28.944952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:28.945697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:28.945735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:28.945745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:28.945981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:28.946211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:28.946221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:28.946228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:28.949728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:28.958791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:28.959593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:28.959632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:28.959642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:28.959878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:28.960108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:28.960118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:28.960125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:28.963631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:28.972689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:28.973468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:28.973506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:28.973516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:28.973752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:28.973972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:28.973981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:28.973989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:28.977496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:28.986562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:28.987221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:28.987258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:28.987270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:28.987507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:28.987727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:28.987736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:28.987744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:28.991247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.000310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.001094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:29.001132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:29.001147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:29.001392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:29.001613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:29.001622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:29.001630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:29.005124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.014195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.014977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:29.015015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:29.015025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:29.015270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:29.015492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:29.015502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:29.015509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:29.019005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.028067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.028860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:29.028898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:29.028908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:29.029144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:29.029371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:29.029382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:29.029389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:29.032887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.041965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.042614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:29.042652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:29.042662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:29.042898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:29.043118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:29.043131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:29.043139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:29.046646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.055718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.056493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:29.056531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:29.056542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:29.056778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:29.056998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:29.057007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:29.057014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:29.060528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.069596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.070277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:29.070296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:29.070304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:29.070521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:29.070738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:29.070746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:29.070753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:29.074278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.083430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.084237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.219 [2024-07-24 20:08:29.084275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.219 [2024-07-24 20:08:29.084286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.219 [2024-07-24 20:08:29.084522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.219 [2024-07-24 20:08:29.084742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.219 [2024-07-24 20:08:29.084751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.219 [2024-07-24 20:08:29.084759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.219 [2024-07-24 20:08:29.088265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.219 [2024-07-24 20:08:29.097332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.219 [2024-07-24 20:08:29.098063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.220 [2024-07-24 20:08:29.098100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.220 [2024-07-24 20:08:29.098111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.220 [2024-07-24 20:08:29.098356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.220 [2024-07-24 20:08:29.098578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.220 [2024-07-24 20:08:29.098587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.220 [2024-07-24 20:08:29.098595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.220 [2024-07-24 20:08:29.102092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.220 [2024-07-24 20:08:29.111156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.220 [2024-07-24 20:08:29.111935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.220 [2024-07-24 20:08:29.111973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.220 [2024-07-24 20:08:29.111984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.220 [2024-07-24 20:08:29.112232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.220 [2024-07-24 20:08:29.112453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.220 [2024-07-24 20:08:29.112462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.220 [2024-07-24 20:08:29.112470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.220 [2024-07-24 20:08:29.115966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.220 [2024-07-24 20:08:29.125024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.220 [2024-07-24 20:08:29.125787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.220 [2024-07-24 20:08:29.125825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.220 [2024-07-24 20:08:29.125836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.220 [2024-07-24 20:08:29.126071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.220 [2024-07-24 20:08:29.126303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.220 [2024-07-24 20:08:29.126313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.220 [2024-07-24 20:08:29.126321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.220 [2024-07-24 20:08:29.129818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.220 [2024-07-24 20:08:29.138882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.220 [2024-07-24 20:08:29.139627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.220 [2024-07-24 20:08:29.139665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.220 [2024-07-24 20:08:29.139676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.220 [2024-07-24 20:08:29.139916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.220 [2024-07-24 20:08:29.140137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.220 [2024-07-24 20:08:29.140147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.220 [2024-07-24 20:08:29.140154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.220 [2024-07-24 20:08:29.143667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.220 [2024-07-24 20:08:29.152725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.220 [2024-07-24 20:08:29.153501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.220 [2024-07-24 20:08:29.153539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.220 [2024-07-24 20:08:29.153549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.220 [2024-07-24 20:08:29.153785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.220 [2024-07-24 20:08:29.154005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.220 [2024-07-24 20:08:29.154015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.220 [2024-07-24 20:08:29.154022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.220 [2024-07-24 20:08:29.157527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.220 [2024-07-24 20:08:29.166595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.220 [2024-07-24 20:08:29.167144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.220 [2024-07-24 20:08:29.167180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.220 [2024-07-24 20:08:29.167193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.220 [2024-07-24 20:08:29.167439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.220 [2024-07-24 20:08:29.167660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.220 [2024-07-24 20:08:29.167669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.220 [2024-07-24 20:08:29.167677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.481 [2024-07-24 20:08:29.171176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.180445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.181229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.181267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.181279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.181516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.181736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.181746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.181760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.185263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.194323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.195051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.195089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.195100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.195344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.195566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.195575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.195583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.199082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.208345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.209037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.209056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.209064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.209288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.209505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.209514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.209521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.213010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.222282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.223028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.223066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.223076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.223321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.223542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.223552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.223560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.227057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.236175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.236978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.237016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.237027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.237274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.237495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.237505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.237512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.241027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.250102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.250849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.250887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.250899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.251136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.251365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.251375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.251383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.254878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.263950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.264624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.264644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.264652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.264869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.265085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.265094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.265101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.268600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.277860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.278605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.278643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.278653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.278893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.279114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.279124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.279131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.282635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.291700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.292475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.292513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.292523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.292759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.292979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.292988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.292996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.296499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.482 [2024-07-24 20:08:29.305568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.482 [2024-07-24 20:08:29.306296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.482 [2024-07-24 20:08:29.306334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:41.482 [2024-07-24 20:08:29.306346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:41.482 [2024-07-24 20:08:29.306583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:41.482 [2024-07-24 20:08:29.306803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.482 [2024-07-24 20:08:29.306813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.482 [2024-07-24 20:08:29.306820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.482 [2024-07-24 20:08:29.310330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.483 [2024-07-24 20:08:29.319398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.320082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.320101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.320109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.320334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.320551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.320560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.320571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.324065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.333329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.334062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.334101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.334111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.334357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.334578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.334588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.334595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.338101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.347181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.347907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.347944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.347955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.348190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.348420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.348430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.348438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.351938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.361007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.361733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.361771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.361782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.362017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.362248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.362258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.362266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.365763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.374826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.375493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.375535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.375546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.375782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.376002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.376012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.376019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.379525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.388588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.389303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.389341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.389351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.389587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.389807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.389816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.389824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.393328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.402394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.403160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.403198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.403217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.403453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.403673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.403682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.403690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.407186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.416248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.417025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.417063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.417074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.417318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.417544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.417554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.417561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.483 [2024-07-24 20:08:29.421057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.483 [2024-07-24 20:08:29.430123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.483 [2024-07-24 20:08:29.430887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.483 [2024-07-24 20:08:29.430925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.483 [2024-07-24 20:08:29.430936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.483 [2024-07-24 20:08:29.431172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.483 [2024-07-24 20:08:29.431402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.483 [2024-07-24 20:08:29.431412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.483 [2024-07-24 20:08:29.431419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.745 [2024-07-24 20:08:29.434916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.745 [2024-07-24 20:08:29.443991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.745 [2024-07-24 20:08:29.444661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.745 [2024-07-24 20:08:29.444699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.745 [2024-07-24 20:08:29.444709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.745 [2024-07-24 20:08:29.444945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.745 [2024-07-24 20:08:29.445166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.745 [2024-07-24 20:08:29.445175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.745 [2024-07-24 20:08:29.445182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.745 [2024-07-24 20:08:29.448689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.745 [2024-07-24 20:08:29.457753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.745 [2024-07-24 20:08:29.458317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.745 [2024-07-24 20:08:29.458354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.745 [2024-07-24 20:08:29.458366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.745 [2024-07-24 20:08:29.458604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.745 [2024-07-24 20:08:29.458824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.745 [2024-07-24 20:08:29.458835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.745 [2024-07-24 20:08:29.458843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.745 [2024-07-24 20:08:29.462363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.745 [2024-07-24 20:08:29.471639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.745 [2024-07-24 20:08:29.472444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.745 [2024-07-24 20:08:29.472482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.745 [2024-07-24 20:08:29.472492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.745 [2024-07-24 20:08:29.472728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.745 [2024-07-24 20:08:29.472948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.745 [2024-07-24 20:08:29.472958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.745 [2024-07-24 20:08:29.472965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.745 [2024-07-24 20:08:29.476472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.485543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.486291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.486329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.486341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.486578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.486798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.486808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.486815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.490321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.499387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.500150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.500188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.500198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.500444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.500663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.500673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.500680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.504176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.513236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.514021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.514059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.514074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.514319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.514539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.514549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.514556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.518052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.527112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.527832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.527870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.527881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.528117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.528349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.528359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.528367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.531865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.540928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.541630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.541668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.541678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.541914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.542134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.542144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.542151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.545664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.554734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.555517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.555555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.555567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.555804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.556024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.556038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.556046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.559549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.568620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.569429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.569467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.569478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.569714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.569934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.569944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.569952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.573458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.582521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.583300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.583347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.583358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.583593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.583814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.583823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.583831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.587337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.746 [2024-07-24 20:08:29.596396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.746 [2024-07-24 20:08:29.597172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.746 [2024-07-24 20:08:29.597216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:41.746 [2024-07-24 20:08:29.597228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:41.746 [2024-07-24 20:08:29.597464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:41.746 [2024-07-24 20:08:29.597684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.746 [2024-07-24 20:08:29.597693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.746 [2024-07-24 20:08:29.597701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.746 [2024-07-24 20:08:29.601203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the reconnect cycle above repeats 27 more times between 20:08:29.610 and 20:08:29.975 (log clock 00:28:41.746 - 00:28:42.273), each attempt failing identically: connect() errno = 111 to tqpair=0x1a583b0 at 10.0.0.2:4420, followed by "Resetting controller failed." ...]
00:28:42.273 [2024-07-24 20:08:29.984736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.273 [2024-07-24 20:08:29.985558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.273 [2024-07-24 20:08:29.985596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.273 [2024-07-24 20:08:29.985608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.273 [2024-07-24 20:08:29.985846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.273 [2024-07-24 20:08:29.986067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.273 [2024-07-24 20:08:29.986076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.273 [2024-07-24 20:08:29.986084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.273 [2024-07-24 20:08:29.989587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3853958 Killed "${NVMF_APP[@]}" "$@" 00:28:42.273 20:08:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:42.273 20:08:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:42.273 20:08:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.273 20:08:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:42.273 20:08:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.273 [2024-07-24 20:08:29.998649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.273 [2024-07-24 20:08:29.999354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.273 [2024-07-24 20:08:29.999392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.273 [2024-07-24 20:08:29.999404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.273 [2024-07-24 20:08:29.999641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.273 [2024-07-24 20:08:29.999861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.273 [2024-07-24 20:08:29.999871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.273 [2024-07-24 20:08:29.999879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.273 [2024-07-24 20:08:30.003843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3855519 00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3855519 00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3855519 ']' 00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.273 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.273 [2024-07-24 20:08:30.012520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.273 [2024-07-24 20:08:30.013234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.273 [2024-07-24 20:08:30.013256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.273 [2024-07-24 20:08:30.013265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.273 [2024-07-24 20:08:30.013485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.273 [2024-07-24 20:08:30.013702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.273 [2024-07-24 20:08:30.013711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.273 [2024-07-24 20:08:30.013718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.273 [2024-07-24 20:08:30.017224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.273 [2024-07-24 20:08:30.026438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.273 [2024-07-24 20:08:30.027109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.273 [2024-07-24 20:08:30.027128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.273 [2024-07-24 20:08:30.027136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.273 [2024-07-24 20:08:30.027360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.027577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.027586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.027594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.031087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.040377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.041021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.041037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.041046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.041270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.041486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.041496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.041503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.045013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.274 [2024-07-24 20:08:30.053812] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:28:42.274 [2024-07-24 20:08:30.053859] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.274 [2024-07-24 20:08:30.054304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.055046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.055085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.055095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.055340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.055562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.055571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.055578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.059076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.068157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.069006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.069044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.069055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.069298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.069520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.069529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.069537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.073030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.081901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.082439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.082459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.082467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.082684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.082900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.082909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.082917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.086417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.274 [2024-07-24 20:08:30.095693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.096439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.096478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.096494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.096731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.096951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.096960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.096968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.100470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.109532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.110288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.110326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.110338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.110576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.110795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.110804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.110812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.114315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.123494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.124169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.124189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.124197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.124422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.124639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.124648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.124655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.128142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.137413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.138147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.138184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.138195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.138437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.138658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.138671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.138680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.142073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:42.274 [2024-07-24 20:08:30.142178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.151265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.152001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.152021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.152029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.152256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.152474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.152483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.152491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.155983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.165055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.165855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.165894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.165904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.166142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.166369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.166379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.166387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.169889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.178970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.179740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.179779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.179790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.180027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.180256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.180266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.180274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.183774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.192851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.193507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.193527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.193535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.193752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.193969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.193978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.193985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.195256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.274 [2024-07-24 20:08:30.195280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.274 [2024-07-24 20:08:30.195286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.274 [2024-07-24 20:08:30.195292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.274 [2024-07-24 20:08:30.195297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:42.274 [2024-07-24 20:08:30.195434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.274 [2024-07-24 20:08:30.195594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.274 [2024-07-24 20:08:30.195596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.274 [2024-07-24 20:08:30.197483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.274 [2024-07-24 20:08:30.206970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.207740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.207780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.207791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.208029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.208258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.274 [2024-07-24 20:08:30.208268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.274 [2024-07-24 20:08:30.208276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.274 [2024-07-24 20:08:30.211774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.274 [2024-07-24 20:08:30.220840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.274 [2024-07-24 20:08:30.221558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.274 [2024-07-24 20:08:30.221578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.274 [2024-07-24 20:08:30.221586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.274 [2024-07-24 20:08:30.221803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.274 [2024-07-24 20:08:30.222026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.275 [2024-07-24 20:08:30.222034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.275 [2024-07-24 20:08:30.222042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.537 [2024-07-24 20:08:30.225540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.537 [2024-07-24 20:08:30.234603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.235498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.235540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.235551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.235789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.236009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.236019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.236027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.239530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.248405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.249111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.249136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.249144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.249366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.249583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.249593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.249600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.253089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.262155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.262849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.262866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.262874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.263089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.263311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.263320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.263327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.266829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.275893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.276537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.276553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.276561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.276777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.276993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.277002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.277009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.280504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.289767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.290452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.290490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.290501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.290737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.290957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.290967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.290975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.294479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.303547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.304304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.304343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.304355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.304593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.304813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.304823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.304830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.308336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.317404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.318014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.318052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.318068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.318313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.318534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.318544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.318551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.322049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.331325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.332113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.332151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.332163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.332410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.332631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.332640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.332647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.336142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.345231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.345907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.345926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.345934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.346151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.537 [2024-07-24 20:08:30.346374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.537 [2024-07-24 20:08:30.346384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.537 [2024-07-24 20:08:30.346391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.537 [2024-07-24 20:08:30.349884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.537 [2024-07-24 20:08:30.359157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.537 [2024-07-24 20:08:30.359956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.537 [2024-07-24 20:08:30.359995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.537 [2024-07-24 20:08:30.360005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.537 [2024-07-24 20:08:30.360253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.360474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.360492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.360499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.363997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.373077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.373886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.373924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.373935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.374171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.374400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.374410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.374418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.377915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.386986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.387770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.387808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.387819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.388055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.388284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.388294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.388302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.391799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.400868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.401448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.401486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.401497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.401733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.401954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.401963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.401970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.405474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.414744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.415352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.415390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.415402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.415640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.415860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.415869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.415877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.419381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.428651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.429468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.429506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.429517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.429753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.429973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.429983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.429990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.433494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.442568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.443455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.443492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.443503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.443739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.443959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.443969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.443977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.447478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.456337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.457035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.457053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.457061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.457288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.457505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.457514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.457522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.461013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.470080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.470785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.470803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.470810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.471027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.471249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.471259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.471266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.474753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.538 [2024-07-24 20:08:30.483817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.538 [2024-07-24 20:08:30.484570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.538 [2024-07-24 20:08:30.484609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.538 [2024-07-24 20:08:30.484619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.538 [2024-07-24 20:08:30.484855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.538 [2024-07-24 20:08:30.485076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.538 [2024-07-24 20:08:30.485085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.538 [2024-07-24 20:08:30.485093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.538 [2024-07-24 20:08:30.488600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.497661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.498476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.498514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.804 [2024-07-24 20:08:30.498525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.804 [2024-07-24 20:08:30.498761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.804 [2024-07-24 20:08:30.498982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.804 [2024-07-24 20:08:30.498991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.804 [2024-07-24 20:08:30.499004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.804 [2024-07-24 20:08:30.502503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.511562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.512013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.512032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.804 [2024-07-24 20:08:30.512039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.804 [2024-07-24 20:08:30.512261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.804 [2024-07-24 20:08:30.512478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.804 [2024-07-24 20:08:30.512488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.804 [2024-07-24 20:08:30.512494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.804 [2024-07-24 20:08:30.515984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.525449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.526101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.526118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.804 [2024-07-24 20:08:30.526125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.804 [2024-07-24 20:08:30.526345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.804 [2024-07-24 20:08:30.526561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.804 [2024-07-24 20:08:30.526570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.804 [2024-07-24 20:08:30.526578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.804 [2024-07-24 20:08:30.530068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.539325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.540093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.540131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.804 [2024-07-24 20:08:30.540143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.804 [2024-07-24 20:08:30.540389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.804 [2024-07-24 20:08:30.540609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.804 [2024-07-24 20:08:30.540620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.804 [2024-07-24 20:08:30.540627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.804 [2024-07-24 20:08:30.544133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.553213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.554027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.554065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.804 [2024-07-24 20:08:30.554076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.804 [2024-07-24 20:08:30.554319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.804 [2024-07-24 20:08:30.554540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.804 [2024-07-24 20:08:30.554550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.804 [2024-07-24 20:08:30.554557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.804 [2024-07-24 20:08:30.558052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.567119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.567913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.567951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.804 [2024-07-24 20:08:30.567963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.804 [2024-07-24 20:08:30.568199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.804 [2024-07-24 20:08:30.568430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.804 [2024-07-24 20:08:30.568440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.804 [2024-07-24 20:08:30.568448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.804 [2024-07-24 20:08:30.571945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.581010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.581470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.581489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.804 [2024-07-24 20:08:30.581497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.804 [2024-07-24 20:08:30.581713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.804 [2024-07-24 20:08:30.581930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.804 [2024-07-24 20:08:30.581939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.804 [2024-07-24 20:08:30.581946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.804 [2024-07-24 20:08:30.585441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.804 [2024-07-24 20:08:30.594906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.804 [2024-07-24 20:08:30.595689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.804 [2024-07-24 20:08:30.595728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.805 [2024-07-24 20:08:30.595740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.805 [2024-07-24 20:08:30.595977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.805 [2024-07-24 20:08:30.596210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.805 [2024-07-24 20:08:30.596220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.805 [2024-07-24 20:08:30.596228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.805 [2024-07-24 20:08:30.599725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.805 [2024-07-24 20:08:30.608783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.805 [2024-07-24 20:08:30.609322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.805 [2024-07-24 20:08:30.609359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:42.805 [2024-07-24 20:08:30.609371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:42.805 [2024-07-24 20:08:30.609609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:42.805 [2024-07-24 20:08:30.609829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.805 [2024-07-24 20:08:30.609838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.805 [2024-07-24 20:08:30.609845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.805 [2024-07-24 20:08:30.613349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.805 [2024-07-24 20:08:30.622617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.623330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.623381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.623392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.623628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.623847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.623857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.623865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.627368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.636429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.637129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.637147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.637155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.637376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.637593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.637603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.637610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.641099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.650166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.650684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.650700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.650708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.650924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.651140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.651148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.651155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.654644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.663905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.664702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.664741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.664751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.664987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.665215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.665225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.665233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.668738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.677838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.678611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.678650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.678661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.678896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.679116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.679126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.679134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.682640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.691706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.692435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.692474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.692491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.692730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.692951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.692961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.692968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.696473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.705537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.706306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.706345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.706357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.706595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.706815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.706824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.706831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.710333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.719395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.720194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.720239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.720252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.805 [2024-07-24 20:08:30.720491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.805 [2024-07-24 20:08:30.720712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.805 [2024-07-24 20:08:30.720721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.805 [2024-07-24 20:08:30.720729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.805 [2024-07-24 20:08:30.724227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.805 [2024-07-24 20:08:30.733292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.805 [2024-07-24 20:08:30.734051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.805 [2024-07-24 20:08:30.734089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.805 [2024-07-24 20:08:30.734101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.806 [2024-07-24 20:08:30.734346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.806 [2024-07-24 20:08:30.734572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.806 [2024-07-24 20:08:30.734581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.806 [2024-07-24 20:08:30.734589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.806 [2024-07-24 20:08:30.738084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.806 [2024-07-24 20:08:30.747161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.806 [2024-07-24 20:08:30.747915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.806 [2024-07-24 20:08:30.747953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:42.806 [2024-07-24 20:08:30.747965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:42.806 [2024-07-24 20:08:30.748211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:42.806 [2024-07-24 20:08:30.748432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.806 [2024-07-24 20:08:30.748442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.806 [2024-07-24 20:08:30.748450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.806 [2024-07-24 20:08:30.751943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 [2024-07-24 20:08:30.761009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.761715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.761735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.761743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.761960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.762177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.762187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.762195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.765699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 [2024-07-24 20:08:30.774764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.775577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.775615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.775626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.775862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.776082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.776092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.776100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.779602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 [2024-07-24 20:08:30.788677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.789461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.789500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.789511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.789747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.789967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.789977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.789985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.793492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 [2024-07-24 20:08:30.802629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.803351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.803390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.803402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.803641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.803861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.803872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.803879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.807376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 [2024-07-24 20:08:30.816438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.817097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.817117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.817125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.817347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.817564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.817573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.817580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.821068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.128 [2024-07-24 20:08:30.830332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.128 [2024-07-24 20:08:30.830966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.831004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.831014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.831257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.831486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.831495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.831503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.834998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 [2024-07-24 20:08:30.844280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.844945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.844965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.844973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.845189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.845412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.845421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.845429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.848924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 [2024-07-24 20:08:30.858209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.858997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.859035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.859048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.859293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.859514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.128 [2024-07-24 20:08:30.859524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.128 [2024-07-24 20:08:30.859532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.128 [2024-07-24 20:08:30.863025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.128 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.128 [2024-07-24 20:08:30.872100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.128 [2024-07-24 20:08:30.872878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.128 [2024-07-24 20:08:30.872917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.128 [2024-07-24 20:08:30.872928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.128 [2024-07-24 20:08:30.873163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.128 [2024-07-24 20:08:30.873391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.129 [2024-07-24 20:08:30.873402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.129 [2024-07-24 20:08:30.873409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.129 [2024-07-24 20:08:30.876906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.129 [2024-07-24 20:08:30.878049] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.129 [2024-07-24 20:08:30.885970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.129 [2024-07-24 20:08:30.886631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.129 [2024-07-24 20:08:30.886650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.129 [2024-07-24 20:08:30.886658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.129 [2024-07-24 20:08:30.886874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.129 [2024-07-24 20:08:30.887091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.129 [2024-07-24 20:08:30.887100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.129 [2024-07-24 20:08:30.887107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.129 [2024-07-24 20:08:30.890602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.129 [2024-07-24 20:08:30.899861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.129 [2024-07-24 20:08:30.900636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.129 [2024-07-24 20:08:30.900674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420 00:28:43.129 [2024-07-24 20:08:30.900685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set 00:28:43.129 [2024-07-24 20:08:30.900921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor 00:28:43.129 [2024-07-24 20:08:30.901142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.129 [2024-07-24 20:08:30.901151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.129 [2024-07-24 20:08:30.901158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.129 [2024-07-24 20:08:30.904664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:43.129 Malloc0
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:43.129 [2024-07-24 20:08:30.913725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.129 [2024-07-24 20:08:30.914524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.129 [2024-07-24 20:08:30.914563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:43.129 [2024-07-24 20:08:30.914574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:43.129 [2024-07-24 20:08:30.914814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:43.129 [2024-07-24 20:08:30.915033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.129 [2024-07-24 20:08:30.915044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.129 [2024-07-24 20:08:30.915051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.129 [2024-07-24 20:08:30.918554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:43.129 [2024-07-24 20:08:30.927617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.129 [2024-07-24 20:08:30.928444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.129 [2024-07-24 20:08:30.928482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:43.129 [2024-07-24 20:08:30.928494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:43.129 [2024-07-24 20:08:30.928729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:43.129 [2024-07-24 20:08:30.928949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.129 [2024-07-24 20:08:30.928959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.129 [2024-07-24 20:08:30.928966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.129 [2024-07-24 20:08:30.932470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:43.129 [2024-07-24 20:08:30.941537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.129 [2024-07-24 20:08:30.942304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.129 [2024-07-24 20:08:30.942345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a583b0 with addr=10.0.0.2, port=4420
00:28:43.129 [2024-07-24 20:08:30.942357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a583b0 is same with the state(5) to be set
00:28:43.129 [2024-07-24 20:08:30.942595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a583b0 (9): Bad file descriptor
00:28:43.129 [2024-07-24 20:08:30.942814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:43.129 [2024-07-24 20:08:30.942823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:43.129 [2024-07-24 20:08:30.942831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:43.129 [2024-07-24 20:08:30.943136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:43.129 [2024-07-24 20:08:30.946345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:43.129 20:08:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3854405
00:28:43.129 [2024-07-24 20:08:30.955412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:43.129 [2024-07-24 20:08:31.028954] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:53.127
00:28:53.127 Latency(us)
00:28:53.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.127 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:53.127 Verification LBA range: start 0x0 length 0x4000
00:28:53.127 Nvme1n1 : 15.01 8570.63 33.48 9897.53 0.00 6904.75 1317.55 16820.91
00:28:53.127 ===================================================================================================================
00:28:53.127 Total : 8570.63 33.48 9897.53 0.00 6904.75 1317.55 16820.91
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:28:53.127 20:08:39
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.127 rmmod nvme_tcp 00:28:53.127 rmmod nvme_fabrics 00:28:53.127 rmmod nvme_keyring 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3855519 ']' 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3855519 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3855519 ']' 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3855519 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3855519 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3855519' 00:28:53.127 killing process with pid 3855519 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@969 -- # kill 3855519 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3855519 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:53.127 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:53.128 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.128 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.128 20:08:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.069 20:08:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:54.069 00:28:54.069 real 0m27.252s 00:28:54.069 user 1m2.413s 00:28:54.069 sys 0m6.788s 00:28:54.069 20:08:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.069 20:08:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:54.069 ************************************ 00:28:54.069 END TEST nvmf_bdevperf 00:28:54.069 ************************************ 00:28:54.069 20:08:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:54.069 20:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:54.069 20:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.069 20:08:41 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.069 ************************************ 00:28:54.069 START TEST nvmf_target_disconnect 00:28:54.069 ************************************ 00:28:54.069 20:08:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:54.331 * Looking for test storage... 00:28:54.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.331 20:08:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.331 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.331 20:08:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:54.332 20:08:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:54.332 20:08:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:02.473 20:08:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:02.473 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:02.473 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.473 20:08:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:02.473 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:02.473 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.473 20:08:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.473 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:02.474 20:08:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.474 20:08:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:02.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:02.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms
00:29:02.474
00:29:02.474 --- 10.0.0.2 ping statistics ---
00:29:02.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.474 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms
00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:02.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:02.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:29:02.474 00:29:02.474 --- 10.0.0.1 ping statistics --- 00:29:02.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.474 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.474 ************************************ 00:29:02.474 START TEST nvmf_target_disconnect_tc1 00:29:02.474 ************************************ 00:29:02.474 20:08:49 
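The `nvmf_tcp_init` sequence above moves one interface into a network namespace so a single host can act as both NVMe/TCP initiator and target. A sketch of those steps, reconstructed from the xtrace output (interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.x` addresses come from the log; this requires root and the interfaces actually existing, so it is illustration only, not a runnable test):

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup nvmf/common.sh performs above (simplified; requires root).
set -e
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # sanity-check both directions, as the log does
```

The two `ping -c 1` checks in the log (0.682 ms and 0.335 ms round trips) confirm this topology is up before any NVMe traffic is attempted.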
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.474 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.474 [2024-07-24 20:08:49.443603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.474 [2024-07-24 20:08:49.443665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161ae20 with addr=10.0.0.2, port=4420 00:29:02.474 [2024-07-24 20:08:49.443696] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:02.474 [2024-07-24 20:08:49.443713] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:02.474 [2024-07-24 20:08:49.443721] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:02.474 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:02.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:02.474 Initializing NVMe Controllers 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.474 20:08:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.474 00:29:02.474 real 0m0.112s 00:29:02.474 user 0m0.045s 00:29:02.474 sys 0m0.066s 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.474 ************************************ 00:29:02.474 END TEST nvmf_target_disconnect_tc1 00:29:02.474 ************************************ 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.474 ************************************ 00:29:02.474 START TEST nvmf_target_disconnect_tc2 00:29:02.474 ************************************ 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
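tc1 passes because the `NOT` wrapper from `autotest_common.sh` inverts the exit status of a command that is *expected* to fail: `valid_exec_arg` first confirms the argument is actually runnable (the `type -t` / `type -P` probes visible in the xtrace), then the wrapped `reconnect` binary is run and its failure (`es=1`) counts as success. A minimal re-implementation of that pattern, simplified from the trace above and not the exact SPDK source:

```shell
# Simplified sketch of autotest_common.sh's valid_exec_arg/NOT pattern.
valid_exec_arg() {
    local arg=$1
    # Builtins and functions are runnable as-is; plain files must be executable.
    case "$(type -t "$arg")" in
        builtin | function) ;;
        file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;
        *) return 1 ;;
    esac
}

NOT() {
    # Succeed only when the wrapped command is runnable but fails.
    valid_exec_arg "$1" || return 1
    ! "$@"
}

NOT false && echo "expected failure detected"
```

With a dead target address, `spdk_nvme_probe()` fails with errno 111 exactly as the log shows, `NOT` flips that into a pass, and the test ends in `END TEST nvmf_target_disconnect_tc1`.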
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3861560 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3861560 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3861560 ']' 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.474 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.475 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:02.475 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.475 20:08:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.475 [2024-07-24 20:08:49.600049] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:29:02.475 [2024-07-24 20:08:49.600109] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.475 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.475 [2024-07-24 20:08:49.686514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.475 [2024-07-24 20:08:49.780993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.475 [2024-07-24 20:08:49.781053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.475 [2024-07-24 20:08:49.781062] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.475 [2024-07-24 20:08:49.781068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.475 [2024-07-24 20:08:49.781074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
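The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from `waitforlisten`, which polls until the freshly launched `nvmf_tgt` is ready before the test continues. A hedged sketch of that polling shape (file existence stands in for the real readiness check, which probes the RPC socket):

```shell
# Minimal waitforlisten-style poll: retry until a path appears or retries run out.
# This only mimics the shape of the helper; the real one probes /var/tmp/spdk.sock via RPC.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

tmp=$(mktemp)                 # stands in for the UNIX domain socket
wait_for_path "$tmp" && echo "listener up"
rm -f "$tmp"
```

Only once this gate opens does the script proceed to the `rpc_cmd` calls (`nvmf_create_transport`, `nvmf_create_subsystem`, `nvmf_subsystem_add_listener`) that follow in the log.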
00:29:02.475 [2024-07-24 20:08:49.781732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:02.475 [2024-07-24 20:08:49.781957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:02.475 [2024-07-24 20:08:49.782158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:02.475 [2024-07-24 20:08:49.782177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:02.475 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.475 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:02.475 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:02.475 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:02.475 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.736 Malloc0 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.736 20:08:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.736 [2024-07-24 20:08:50.468654] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.736 20:08:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.736 [2024-07-24 20:08:50.509043] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3861809 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:02.736 20:08:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.736 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:04.651 20:08:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3861560 00:29:04.651 20:08:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Read completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Write completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Write completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.651 Write completed with error (sct=0, sc=8) 00:29:04.651 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Read completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, 
sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Read completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Read completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Read completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Read completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Read completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Read completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 Write completed with error (sct=0, sc=8) 00:29:04.652 starting I/O failed 00:29:04.652 [2024-07-24 20:08:52.541936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.652 [2024-07-24 20:08:52.542427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.542456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.542882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.542891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 
00:29:04.652 [2024-07-24 20:08:52.543427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.543455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.543918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.543927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.544495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.544522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.544984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.544994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.545544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.545575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 
00:29:04.652 [2024-07-24 20:08:52.546076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.546084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.546472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.546499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.546946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.546956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.547522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.547550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.548009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.548018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 
00:29:04.652 [2024-07-24 20:08:52.548559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.548586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.549076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.549086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.549537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.549545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.549987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.549994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.550530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.550559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 
00:29:04.652 [2024-07-24 20:08:52.550896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.550905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.551467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.551495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.551946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.551955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.552514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.552542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.552948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.552957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 
00:29:04.652 [2024-07-24 20:08:52.553390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.553418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.553871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.553881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.554288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.554296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.554710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.554718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 00:29:04.652 [2024-07-24 20:08:52.555177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.555184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it. 
00:29:04.652 [2024-07-24 20:08:52.555526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.652 [2024-07-24 20:08:52.555534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.652 qpair failed and we were unable to recover it.
[... the same message pair — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 20:08:52.555526 through 20:08:52.605139; repeats elided ...]
00:29:04.924 [2024-07-24 20:08:52.605558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.924 [2024-07-24 20:08:52.605564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.924 qpair failed and we were unable to recover it. 00:29:04.924 [2024-07-24 20:08:52.606018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.924 [2024-07-24 20:08:52.606025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.924 qpair failed and we were unable to recover it. 00:29:04.924 [2024-07-24 20:08:52.606528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.924 [2024-07-24 20:08:52.606555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.924 qpair failed and we were unable to recover it. 00:29:04.924 [2024-07-24 20:08:52.607047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.924 [2024-07-24 20:08:52.607056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.924 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.607605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.607633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 
00:29:04.925 [2024-07-24 20:08:52.608046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.608055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.608574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.608602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.609013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.609022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.609577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.609604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.610022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.610030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 
00:29:04.925 [2024-07-24 20:08:52.610533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.610561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.610978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.610986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.611508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.611536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.611954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.611963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.612392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.612419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 
00:29:04.925 [2024-07-24 20:08:52.612948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.612957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.613470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.613498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.613913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.613922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.614466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.614494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.614912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.614921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 
00:29:04.925 [2024-07-24 20:08:52.615482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.615509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.615931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.615940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.616350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.616357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.616808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.616815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.617222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.617232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 
00:29:04.925 [2024-07-24 20:08:52.617661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.617668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.618015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.618023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.618429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.618436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.618834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.618840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.619117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.619126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 
00:29:04.925 [2024-07-24 20:08:52.619497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.619504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.619913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.619919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.620319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.620326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.620752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.620759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.621157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.621164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 
00:29:04.925 [2024-07-24 20:08:52.621571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.621578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.621977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.621984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.622501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.925 [2024-07-24 20:08:52.622529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.925 qpair failed and we were unable to recover it. 00:29:04.925 [2024-07-24 20:08:52.622860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.622869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.623328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.623335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 
00:29:04.926 [2024-07-24 20:08:52.623766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.623773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.624187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.624193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.624599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.624606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.625011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.625017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.625553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.625580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 
00:29:04.926 [2024-07-24 20:08:52.626028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.626037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.626546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.626573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.626995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.627003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.627512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.627541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.627970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.627978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 
00:29:04.926 [2024-07-24 20:08:52.628475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.628502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.628919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.628928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.629372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.629399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.629879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.629888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.630465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.630492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 
00:29:04.926 [2024-07-24 20:08:52.630909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.630919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.631374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.631381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.631833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.631839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.632285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.632292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.632698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.632705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 
00:29:04.926 [2024-07-24 20:08:52.633151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.633158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.633583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.633592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.634019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.634026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.634551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.634579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.635002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.635015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 
00:29:04.926 [2024-07-24 20:08:52.635521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.635549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.635967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.635975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.636414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.636442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.636865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.636873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.637277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.637285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 
00:29:04.926 [2024-07-24 20:08:52.637743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.637750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.638156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.638162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.638595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.926 [2024-07-24 20:08:52.638602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.926 qpair failed and we were unable to recover it. 00:29:04.926 [2024-07-24 20:08:52.638931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.927 [2024-07-24 20:08:52.638938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.927 qpair failed and we were unable to recover it. 00:29:04.927 [2024-07-24 20:08:52.639363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.927 [2024-07-24 20:08:52.639370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.927 qpair failed and we were unable to recover it. 
00:29:04.927 [2024-07-24 20:08:52.639794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.927 [2024-07-24 20:08:52.639802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:04.927 qpair failed and we were unable to recover it.
00:29:04.930 [2024-07-24 20:08:52.688890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.688899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.689284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.689291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.689693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.689700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.690098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.690104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.690535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.690542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 
00:29:04.930 [2024-07-24 20:08:52.690990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.690997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.691501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.691508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.691934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.691941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.692475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.692502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.692920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.692928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 
00:29:04.930 [2024-07-24 20:08:52.693441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.693468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.693921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.693930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.694427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.694454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.694961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.694969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 00:29:04.930 [2024-07-24 20:08:52.695488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.695516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.930 qpair failed and we were unable to recover it. 
00:29:04.930 [2024-07-24 20:08:52.695824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-24 20:08:52.695833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.696290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.696297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.696731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.696737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.697140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.697146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.697613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.697620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 
00:29:04.931 [2024-07-24 20:08:52.698111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.698117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.698511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.698518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.698915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.698921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.699402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.699410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.699804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.699811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 
00:29:04.931 [2024-07-24 20:08:52.700097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.700105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.700530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.700538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.700960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.700967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.701396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.701403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.701798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.701804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 
00:29:04.931 [2024-07-24 20:08:52.702105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.702113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.702537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.702544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.702940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.702946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.703345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.703352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.703866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.703874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 
00:29:04.931 [2024-07-24 20:08:52.704281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.704288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.704695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.704701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.705105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.705111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.705529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.705536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.706010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.706017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 
00:29:04.931 [2024-07-24 20:08:52.706487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.706495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.706786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.706793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.707229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.707236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.707447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.707457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.707874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.707880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 
00:29:04.931 [2024-07-24 20:08:52.708322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.708329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.708773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.708779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.708982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.708990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.709398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.709405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.709804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.709811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 
00:29:04.931 [2024-07-24 20:08:52.710213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.710220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.931 qpair failed and we were unable to recover it. 00:29:04.931 [2024-07-24 20:08:52.710630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-24 20:08:52.710637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.711058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.711065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.711479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.711486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.711885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.711892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 
00:29:04.932 [2024-07-24 20:08:52.712335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.712342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.712744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.712751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.713158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.713166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.713581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.713588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.713914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.713921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 
00:29:04.932 [2024-07-24 20:08:52.714240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.714247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.714685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.714691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.715136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.715142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.715548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.715555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.715879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.715886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 
00:29:04.932 [2024-07-24 20:08:52.716354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.716361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.716784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.716791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.717188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.717194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.717598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.717605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.718076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.718083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 
00:29:04.932 [2024-07-24 20:08:52.718616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.718643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.719060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.719069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.719574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.719602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.720017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.720025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 00:29:04.932 [2024-07-24 20:08:52.720528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-24 20:08:52.720559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.932 qpair failed and we were unable to recover it. 
00:29:04.932 [2024-07-24 20:08:52.720970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.932 [2024-07-24 20:08:52.720979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:04.932 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7fe3ec000b90, addr=10.0.0.2, port=4420 through 2024-07-24 20:08:52.772546 ...]
00:29:04.936 [2024-07-24 20:08:52.772953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.936 [2024-07-24 20:08:52.772960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:04.936 qpair failed and we were unable to recover it.
00:29:04.936 [2024-07-24 20:08:52.773381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.773389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.773830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.773836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.774236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.774243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.774640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.774646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.775066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.775073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 
00:29:04.936 [2024-07-24 20:08:52.775582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.775589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.775997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.776003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.776513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.776540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.776910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.776918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.777484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.777511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 
00:29:04.936 [2024-07-24 20:08:52.777966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.777974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.778488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.778515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.778846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.778855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.779289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.779297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.779733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.779740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 
00:29:04.936 [2024-07-24 20:08:52.780152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.780159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.780603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.780610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.781046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.781054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.781572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.781599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.782049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.782058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 
00:29:04.936 [2024-07-24 20:08:52.782442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.782469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.782912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.782920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.783476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.783503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.783920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.936 [2024-07-24 20:08:52.783929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.936 qpair failed and we were unable to recover it. 00:29:04.936 [2024-07-24 20:08:52.784431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.784459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 
00:29:04.937 [2024-07-24 20:08:52.784870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.784879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.785327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.785334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.785745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.785753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.786195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.786205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.786638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.786644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 
00:29:04.937 [2024-07-24 20:08:52.787083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.787089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.787505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.787515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.787953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.787959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.788457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.788485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.788934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.788943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 
00:29:04.937 [2024-07-24 20:08:52.789466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.789495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.789913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.789921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.790414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.790441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.790762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.790770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.791152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.791159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 
00:29:04.937 [2024-07-24 20:08:52.791482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.791489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.791913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.791920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.792343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.792350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.792754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.792760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.793218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.793226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 
00:29:04.937 [2024-07-24 20:08:52.793560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.793567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.793970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.793977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.794297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.794304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.794711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.794718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.795122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.795128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 
00:29:04.937 [2024-07-24 20:08:52.795354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.795365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.795642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.795649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.795824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.795831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.796309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.796316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.796728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.796734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 
00:29:04.937 [2024-07-24 20:08:52.797137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.797144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.797349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.797357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.797765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.797773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.937 qpair failed and we were unable to recover it. 00:29:04.937 [2024-07-24 20:08:52.798231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.937 [2024-07-24 20:08:52.798239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.798660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.798666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.938 [2024-07-24 20:08:52.798975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.798982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.799205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.799214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.799635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.799642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.799963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.799970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.800258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.800265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.938 [2024-07-24 20:08:52.800687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.800693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.801119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.801126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.801441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.801449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.801867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.801874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.802333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.802339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.938 [2024-07-24 20:08:52.802646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.802658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.803084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.803093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.803505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.803512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.803920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.803926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.804330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.804337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.938 [2024-07-24 20:08:52.804770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.804776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.805187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.805193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.805609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.805617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.806023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.806030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.806531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.806559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.938 [2024-07-24 20:08:52.806980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.806989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.807449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.807477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.807895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.807904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.808416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.808443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.808866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.808876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.938 [2024-07-24 20:08:52.809290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.809298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.809726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.809733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.810144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.810151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.810479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.810486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.810784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.810790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.938 [2024-07-24 20:08:52.811187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.811194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.811628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.811636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.812043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.812050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.812418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.812446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 00:29:04.938 [2024-07-24 20:08:52.812827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.938 [2024-07-24 20:08:52.812836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.938 qpair failed and we were unable to recover it. 
00:29:04.939 [2024-07-24 20:08:52.813262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.813269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.813683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.813690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.813892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.813901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.814360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.814368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.814845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.814852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 
00:29:04.939 [2024-07-24 20:08:52.815322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.815330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.815774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.815780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.816207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.816215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.816642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.816649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.817101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.817108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 
00:29:04.939 [2024-07-24 20:08:52.817524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.817531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.817935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.817942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.818468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.818496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.818904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.818913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.819258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.819266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 
00:29:04.939 [2024-07-24 20:08:52.819677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.819685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.820108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.820118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.820531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.820538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.820981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.820987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.821397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.821404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 
00:29:04.939 [2024-07-24 20:08:52.821715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.821723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.822184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.822190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.822612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.822619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.822900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.822908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.823354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.823361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 
00:29:04.939 [2024-07-24 20:08:52.823643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.823651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.824082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.824089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.824532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.939 [2024-07-24 20:08:52.824539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.939 qpair failed and we were unable to recover it. 00:29:04.939 [2024-07-24 20:08:52.824946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.824952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.825357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.825364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-07-24 20:08:52.825776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.825782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.826184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.826191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.826479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.826486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.826929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.826936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.827269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.827276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-07-24 20:08:52.827560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.827566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.827992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.827999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.828428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.828435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.828708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.828714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.829149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.829156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-07-24 20:08:52.829581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.829589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.829901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.829908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.830332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.830339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.830789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.830796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.831239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.831247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-07-24 20:08:52.831696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.831703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.832164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.832170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.832678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.832685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.833160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.833166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.833620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.833628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-07-24 20:08:52.834097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.834103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.834564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.834571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.834984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.834990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.835459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.835487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.835921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.835931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-07-24 20:08:52.836446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.836474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.836922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.836934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.837444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.837472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.837897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.837906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.838410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.838437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 
00:29:04.940 [2024-07-24 20:08:52.838895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.838904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.940 [2024-07-24 20:08:52.839142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.940 [2024-07-24 20:08:52.839150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.940 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.839444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.839451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.839786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.839793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.840205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.840213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 
00:29:04.941 [2024-07-24 20:08:52.840626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.840634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.841058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.841065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.841557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.841584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.841899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.841908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.842463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.842491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 
00:29:04.941 [2024-07-24 20:08:52.842908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.842916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.843429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.843457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.843955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.843965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.844464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.844491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 00:29:04.941 [2024-07-24 20:08:52.844907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.844915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 
00:29:04.941 [2024-07-24 20:08:52.845240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.941 [2024-07-24 20:08:52.845248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:04.941 qpair failed and we were unable to recover it. 
[... same three-message sequence — posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeated roughly 115 times between 20:08:52.845 and 20:08:52.895; first and last occurrences shown ...]
00:29:05.215 [2024-07-24 20:08:52.895216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.895224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.895655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.895663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.896090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.896098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.896430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.896439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.896847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.896853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.897143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.897149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.897564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.897571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.898013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.898021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.898424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.898431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.898854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.898860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.899298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.899305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.899526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.899536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.899992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.899999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.900407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.900414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.900620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.900628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.901113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.901120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.901443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.901451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.901885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.901893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.902316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.902323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.902760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.902766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.903102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.903117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.903631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.903638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.904032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.904039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.904260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.904267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.904592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.904598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.904789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.904797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.905187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.905194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.905597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.905608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.905901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.905908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.906117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.906126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.906561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.906569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.906887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.906893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.907298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.907305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.907732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.907739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.908126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.908133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 00:29:05.215 [2024-07-24 20:08:52.908556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.215 [2024-07-24 20:08:52.908563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.215 qpair failed and we were unable to recover it. 
00:29:05.215 [2024-07-24 20:08:52.908974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.908980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.909415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.909422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.909820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.909827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.910242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.910250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.910639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.910646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 
00:29:05.216 [2024-07-24 20:08:52.911054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.911061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.911549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.911555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.912000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.912007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.912552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.912580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.913052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.913060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 
00:29:05.216 [2024-07-24 20:08:52.913573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.913600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.914057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.914065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.914631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.914659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.915091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.915099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.915627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.915656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 
00:29:05.216 [2024-07-24 20:08:52.916087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.916096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.916526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.916534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.916934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.916941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.917468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.917495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.917969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.917978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 
00:29:05.216 [2024-07-24 20:08:52.918530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.918557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.919004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.919012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.919415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.919422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.919862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.919868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.920319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.920326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 
00:29:05.216 [2024-07-24 20:08:52.920742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.920748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.921141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.921148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.921576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.921583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.921874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.921883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.922221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.922229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 
00:29:05.216 [2024-07-24 20:08:52.922647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.922654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.923076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.923086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.923501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.923508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.923911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.923918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.924360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.924367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 
00:29:05.216 [2024-07-24 20:08:52.924781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.924788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.925125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.925132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.216 qpair failed and we were unable to recover it. 00:29:05.216 [2024-07-24 20:08:52.925568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.216 [2024-07-24 20:08:52.925575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.217 qpair failed and we were unable to recover it. 00:29:05.217 [2024-07-24 20:08:52.925888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.217 [2024-07-24 20:08:52.925894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.217 qpair failed and we were unable to recover it. 00:29:05.217 [2024-07-24 20:08:52.926318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.217 [2024-07-24 20:08:52.926326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.217 qpair failed and we were unable to recover it. 
00:29:05.217 [2024-07-24 20:08:52.926728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.217 [2024-07-24 20:08:52.926734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.217 qpair failed and we were unable to recover it. 
[... identical connect()/qpair-failure message pairs repeat continuously from 20:08:52.926 to 20:08:52.975 (same tqpair=0x7fe3ec000b90, addr=10.0.0.2, port=4420, errno = 111); repeated entries trimmed ...]
00:29:05.220 [2024-07-24 20:08:52.975730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.975737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 
00:29:05.220 [2024-07-24 20:08:52.976180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.976186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.976591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.976598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.977037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.977044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.977445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.977452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.977877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.977884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 
00:29:05.220 [2024-07-24 20:08:52.978435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.978464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.978911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.978919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.979311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.979319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.979727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.979733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.980132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.980139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 
00:29:05.220 [2024-07-24 20:08:52.980552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.980559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.980958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.980965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.981360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.981367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.981762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.981769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.982215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.982223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 
00:29:05.220 [2024-07-24 20:08:52.982541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.982548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.982955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.982961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.983366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.983375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.983719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.983725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.984157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.984163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 
00:29:05.220 [2024-07-24 20:08:52.984484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.984491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.984931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.984937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.985338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.985345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.220 [2024-07-24 20:08:52.985756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.220 [2024-07-24 20:08:52.985764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.220 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.986162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.986170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:52.986584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.986593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.987031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.987038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.987557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.987585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.988001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.988009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.988453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.988481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:52.988961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.988970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.989488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.989516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.989939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.989948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.990480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.990507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.990951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.990959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:52.991487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.991515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.991931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.991940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.992469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.992497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.992913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.992922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.993408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.993436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:52.993851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.993859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.994309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.994317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.994791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.994799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.995209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.995216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.995606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.995613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:52.996052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.996059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.996567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.996594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.996807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.996817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.997259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.997268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.997677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.997683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:52.997958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.997967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.998288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.998295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.998703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.998710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.999148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.999155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:52.999566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.999574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:52.999978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:52.999986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:53.000409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:53.000416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:53.000617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:53.000626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:53.001058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:53.001064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:53.001486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:53.001494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 
00:29:05.221 [2024-07-24 20:08:53.001897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.221 [2024-07-24 20:08:53.001904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.221 qpair failed and we were unable to recover it. 00:29:05.221 [2024-07-24 20:08:53.002099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.002107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.002498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.002505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.002903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.002913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.003353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.003360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 
00:29:05.222 [2024-07-24 20:08:53.003858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.003864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.004376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.004383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.004698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.004705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.005126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.005133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.005442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.005449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 
00:29:05.222 [2024-07-24 20:08:53.005887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.005894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.006287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.006294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.006616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.006623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.007034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.007040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 00:29:05.222 [2024-07-24 20:08:53.007460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.222 [2024-07-24 20:08:53.007467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.222 qpair failed and we were unable to recover it. 
00:29:05.222 [2024-07-24 20:08:53.007912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.222 [2024-07-24 20:08:53.007919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:05.222 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, i.e. ECONNREFUSED, followed by the sock connection error for tqpair=0x7fe3ec000b90 at 10.0.0.2:4420 and the unrecoverable-qpair message) repeats continuously from 20:08:53.008 through 20:08:53.054; duplicate entries elided ...]
00:29:05.225 [2024-07-24 20:08:53.054825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.054832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.055284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.055291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.055744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.055751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.056176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.056183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.056562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.056569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 
00:29:05.225 [2024-07-24 20:08:53.057011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.057020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.057436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.057443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.057883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.057890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.058302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.058309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.058608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.058614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 
00:29:05.225 [2024-07-24 20:08:53.059062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.059069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.059485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.059492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.059893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.059899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.225 qpair failed and we were unable to recover it. 00:29:05.225 [2024-07-24 20:08:53.060456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.225 [2024-07-24 20:08:53.060484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.060840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.060848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.061303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.061311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.061738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.061745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.062170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.062177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.062609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.062618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.063067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.063073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.063559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.063586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.064003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.064012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.064526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.064553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.064987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.064997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.065420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.065448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.065698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.065706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.066005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.066019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.066449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.066457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.066877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.066885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.067115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.067121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.067554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.067561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.067967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.067974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.068374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.068381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.068804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.068814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.069270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.069277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.069745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.069751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.070067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.070074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.070501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.070509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.070952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.070959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.071523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.071550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.071874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.071883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.072190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.072197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.072636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.072643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.073069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.073076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.073614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.073642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.073973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.073982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.074547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.074574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.074887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.074896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.075213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.075222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.075638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.075645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 
00:29:05.226 [2024-07-24 20:08:53.076053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.226 [2024-07-24 20:08:53.076060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.226 qpair failed and we were unable to recover it. 00:29:05.226 [2024-07-24 20:08:53.076463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.076470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.076893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.076900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.077406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.077434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.077917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.077925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 
00:29:05.227 [2024-07-24 20:08:53.078435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.078463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.078879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.078888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.079263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.079271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.079571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.079578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.080006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.080012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 
00:29:05.227 [2024-07-24 20:08:53.080487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.080494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.080891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.080897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.081370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.081377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.081679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.081687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.082118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.082124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 
00:29:05.227 [2024-07-24 20:08:53.082535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.082542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.082945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.082952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.083274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.083282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.083672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.083678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.084094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.084100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 
00:29:05.227 [2024-07-24 20:08:53.084533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.084540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.084944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.084950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.085335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.085343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.085863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.085871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.086312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.086319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 
00:29:05.227 [2024-07-24 20:08:53.086603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.086610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.087025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.087033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.087458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.087465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.087867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.087874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 00:29:05.227 [2024-07-24 20:08:53.088164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.227 [2024-07-24 20:08:53.088170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.227 qpair failed and we were unable to recover it. 
00:29:05.230 [2024-07-24 20:08:53.135197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.230 [2024-07-24 20:08:53.135216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.230 qpair failed and we were unable to recover it. 00:29:05.230 [2024-07-24 20:08:53.135525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.230 [2024-07-24 20:08:53.135531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.230 qpair failed and we were unable to recover it. 00:29:05.230 [2024-07-24 20:08:53.135944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.230 [2024-07-24 20:08:53.135952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.230 qpair failed and we were unable to recover it. 00:29:05.230 [2024-07-24 20:08:53.136425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.136456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.136869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.136877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.137287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.137295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.137763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.137770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.138163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.138171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.138650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.138657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.139098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.139104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.139534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.139542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.139966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.139973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.140185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.140195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.140633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.140641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.141066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.141074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.141575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.141603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.142048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.142057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.142461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.142488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.142965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.142974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.143493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.143520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.143967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.143976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.144513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.144541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.144960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.144969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.145178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.145187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.145621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.145628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.145834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.145842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.146285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.146293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.146708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.146715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.147127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.147134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.147545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.147552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.147955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.147962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.148156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.148164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.148554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.148562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.148971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.148978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.149180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.149188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.149614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.149622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.150065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.150073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.150596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.150624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.151041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.151051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 00:29:05.231 [2024-07-24 20:08:53.151581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.151609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.231 qpair failed and we were unable to recover it. 
00:29:05.231 [2024-07-24 20:08:53.151899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.231 [2024-07-24 20:08:53.151907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.152443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.152470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.152946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.152955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.153490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.153521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.153997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.154005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 
00:29:05.232 [2024-07-24 20:08:53.154540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.154568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.154897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.154906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.155448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.155476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.155787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.155796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.232 [2024-07-24 20:08:53.156213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.156221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 
00:29:05.232 [2024-07-24 20:08:53.156644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.232 [2024-07-24 20:08:53.156651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.232 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.157075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.157085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.157404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.157412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.158184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.158208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.158610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.158618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 
00:29:05.506 [2024-07-24 20:08:53.159133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.159147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.159550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.159559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.160006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.160013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.160431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.160438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.160747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.160755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 
00:29:05.506 [2024-07-24 20:08:53.161226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.161233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.161662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.161669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.162087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.162094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.162494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.162502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.162905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.162912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 
00:29:05.506 [2024-07-24 20:08:53.163325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.163332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.163758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.163764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.164218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.164225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.164568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.164575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.165004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.165011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 
00:29:05.506 [2024-07-24 20:08:53.165431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.165439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.165837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.165844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.166256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.506 [2024-07-24 20:08:53.166265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.506 qpair failed and we were unable to recover it. 00:29:05.506 [2024-07-24 20:08:53.166700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.507 [2024-07-24 20:08:53.166707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.507 qpair failed and we were unable to recover it. 00:29:05.507 [2024-07-24 20:08:53.167072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.507 [2024-07-24 20:08:53.167086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.507 qpair failed and we were unable to recover it. 
00:29:05.507 [2024-07-24 20:08:53.167389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.507 [2024-07-24 20:08:53.167397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:05.507 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triplet for tqpair=0x7fe3ec000b90 (addr=10.0.0.2, port=4420) repeats continuously through timestamp 20:08:53.217554; only the timestamps change ...]
00:29:05.511 [2024-07-24 20:08:53.217873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.217883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.218300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.218307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.218736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.218742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.219171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.219178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.219595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.219603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 
00:29:05.511 [2024-07-24 20:08:53.220033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.220040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.220522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.220551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.220883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.220892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.221295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.221304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.221718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.221725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 
00:29:05.511 [2024-07-24 20:08:53.222058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.222065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.222482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.222490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.222702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.222709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.222976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.511 [2024-07-24 20:08:53.222984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.511 qpair failed and we were unable to recover it. 00:29:05.511 [2024-07-24 20:08:53.223395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.223402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 
00:29:05.512 [2024-07-24 20:08:53.223615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.223625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.224047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.224055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.224480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.224487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.224888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.224894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.225296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.225303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 
00:29:05.512 [2024-07-24 20:08:53.225720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.225727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.226134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.226141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.226552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.226559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.226976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.226984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.227873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.227891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 
00:29:05.512 [2024-07-24 20:08:53.228274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.228282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.228673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.228681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.229106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.229113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.229584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.229591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.229947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.229954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 
00:29:05.512 [2024-07-24 20:08:53.230271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.230278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.230620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.512 [2024-07-24 20:08:53.230626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.512 qpair failed and we were unable to recover it. 00:29:05.512 [2024-07-24 20:08:53.230841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.230847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.231147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.231153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.231567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.231574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 
00:29:05.513 [2024-07-24 20:08:53.232028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.232036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.232474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.232482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.232908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.232915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.233470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.233498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.233910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.233918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 
00:29:05.513 [2024-07-24 20:08:53.234315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.234326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.234753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.234761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.235213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.235221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.235562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.235569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.235985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.235992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 
00:29:05.513 [2024-07-24 20:08:53.236348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.236355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.236778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.236785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.237236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.237244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.237656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.237664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.237988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.237995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 
00:29:05.513 [2024-07-24 20:08:53.238442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.238450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.238874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.238881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.239377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.239405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.239880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.239888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.240193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.240216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 
00:29:05.513 [2024-07-24 20:08:53.240636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.240643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.240877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.240883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.241258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.241266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.241696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.241703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.242157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.242164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 
00:29:05.513 [2024-07-24 20:08:53.242587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.242594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.243115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.243122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.243512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.243518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.513 [2024-07-24 20:08:53.243941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.513 [2024-07-24 20:08:53.243948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.513 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.244490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.244518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 
00:29:05.514 [2024-07-24 20:08:53.244850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.244859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.245154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.245162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.245486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.245494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.245927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.245933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.246363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.246370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 
00:29:05.514 [2024-07-24 20:08:53.246804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.246811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.247244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.247251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.247689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.247695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.248112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.248119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.248429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.248436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 
00:29:05.514 [2024-07-24 20:08:53.248761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.248768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.249192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.249207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.249571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.249579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.250036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.250043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.250461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.250489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 
00:29:05.514 [2024-07-24 20:08:53.250943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.250955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.251287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.251295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.251742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.251749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.252169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.252175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.252463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.252470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 
00:29:05.514 [2024-07-24 20:08:53.252896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.252903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.253226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.253234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.253672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.253678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.254088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.254094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.254529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.254537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 
00:29:05.514 [2024-07-24 20:08:53.254938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.254944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.255337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.514 [2024-07-24 20:08:53.255344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.514 qpair failed and we were unable to recover it. 00:29:05.514 [2024-07-24 20:08:53.255762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.255768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.256166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.256172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.256495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.256503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 
00:29:05.515 [2024-07-24 20:08:53.256923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.256929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.257332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.257339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.257783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.257790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.258224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.258231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.258652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.258659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 
00:29:05.515 [2024-07-24 20:08:53.259105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.259114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.260046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.260063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.260421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.260430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.260853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.260860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.261254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.261261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 
00:29:05.515 [2024-07-24 20:08:53.261670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.261678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.262150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.262157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.262352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.262362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.262812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.262819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.263022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.263030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 
00:29:05.515 [2024-07-24 20:08:53.263423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.263430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.263839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.263845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.264249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.264256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.264729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.264736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.265140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.265149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 
00:29:05.515 [2024-07-24 20:08:53.265582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.265589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.265993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.266000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.266400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.266407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.266827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.266834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.267324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.267330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 
00:29:05.515 [2024-07-24 20:08:53.267733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.267742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.268221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.515 [2024-07-24 20:08:53.268229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.515 qpair failed and we were unable to recover it. 00:29:05.515 [2024-07-24 20:08:53.268620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.268627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.269026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.269033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.269352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.269360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 
00:29:05.516 [2024-07-24 20:08:53.269791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.269798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.270217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.270224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.270647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.270653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.271083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.271090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.271531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.271538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 
00:29:05.516 [2024-07-24 20:08:53.271970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.271977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.272327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.272334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.272773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.272780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.273056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.273065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.273591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.273599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 
00:29:05.516 [2024-07-24 20:08:53.274065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.274072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.274511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.274539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.274958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.274967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.275494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.275521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.275943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.275951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 
00:29:05.516 [2024-07-24 20:08:53.276561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.276589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.277005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.277015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.277527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.277554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.277936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.277946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.278514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.278542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 
00:29:05.516 [2024-07-24 20:08:53.279375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.279394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.279797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.279805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.280618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.280633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.281032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.281039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.516 qpair failed and we were unable to recover it. 00:29:05.516 [2024-07-24 20:08:53.281549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.516 [2024-07-24 20:08:53.281576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 
00:29:05.517 [2024-07-24 20:08:53.281996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.282004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.282529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.282557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.283452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.283468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.283868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.283876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.284176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.284183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 
00:29:05.517 [2024-07-24 20:08:53.284618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.284625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.285035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.285041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.285551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.285578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.285995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.286005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.286528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.286556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 
00:29:05.517 [2024-07-24 20:08:53.286972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.286983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.287507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.287535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.287864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.287873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.288435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.288462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.288879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.288888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 
00:29:05.517 [2024-07-24 20:08:53.289311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.289319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.289771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.289778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.290102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.290110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.290532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.290540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.290961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.290967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 
00:29:05.517 [2024-07-24 20:08:53.291387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.291394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.291807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.291814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.292255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.292262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.292543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.292550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 00:29:05.517 [2024-07-24 20:08:53.293034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.517 [2024-07-24 20:08:53.293041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.517 qpair failed and we were unable to recover it. 
00:29:05.522 [2024-07-24 20:08:53.339641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.339649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.340053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.340060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.340485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.340493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.340903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.340909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.341433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.341461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 
00:29:05.522 [2024-07-24 20:08:53.341878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.341887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.342313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.342320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.342774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.342782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.343206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.343213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.343616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.343623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 
00:29:05.522 [2024-07-24 20:08:53.344028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.344035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.344557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.344585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.522 [2024-07-24 20:08:53.345002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.522 [2024-07-24 20:08:53.345010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.522 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.345537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.345565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.345981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.345989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 
00:29:05.523 [2024-07-24 20:08:53.346571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.346598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.347013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.347021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.347443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.347450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.347879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.347887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.348394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.348421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 
00:29:05.523 [2024-07-24 20:08:53.348755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.348764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.349189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.349196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.349529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.349538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.349998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.350006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.350445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.350478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 
00:29:05.523 [2024-07-24 20:08:53.350901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.350910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.351409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.351437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.351882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.351891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.352216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.352224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.352658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.352666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 
00:29:05.523 [2024-07-24 20:08:53.353065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.353072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.353515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.353522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.353851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.353857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.354079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.354086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.354560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.354568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 
00:29:05.523 [2024-07-24 20:08:53.354975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.354985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.355262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.355270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.523 qpair failed and we were unable to recover it. 00:29:05.523 [2024-07-24 20:08:53.355471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.523 [2024-07-24 20:08:53.355478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.355907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.355913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.356357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.356365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 
00:29:05.524 [2024-07-24 20:08:53.356788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.356796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.357131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.357138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.357589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.357596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.358057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.358063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.358558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.358585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 
00:29:05.524 [2024-07-24 20:08:53.359002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.359011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.359224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.359235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.359623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.359630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.360035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.360042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.360342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.360350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 
00:29:05.524 [2024-07-24 20:08:53.360747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.360754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.361191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.361199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.361603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.361610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.362023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.362029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.362418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.362446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 
00:29:05.524 [2024-07-24 20:08:53.362877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.362886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.363480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.363509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.363937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.363946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.364466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.364493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.364911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.364920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 
00:29:05.524 [2024-07-24 20:08:53.365403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.365410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.365812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.365819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.366225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.366233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.524 [2024-07-24 20:08:53.366658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.524 [2024-07-24 20:08:53.366664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.524 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.367075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.367081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 
00:29:05.525 [2024-07-24 20:08:53.367384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.367392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.367771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.367779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.368184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.368192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.368625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.368633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.368925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.368932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 
00:29:05.525 [2024-07-24 20:08:53.369473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.369501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.369954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.369963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.370478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.370506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.370918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.370927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 00:29:05.525 [2024-07-24 20:08:53.371451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.525 [2024-07-24 20:08:53.371478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.525 qpair failed and we were unable to recover it. 
00:29:05.525 [2024-07-24 20:08:53.371882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.525 [2024-07-24 20:08:53.371895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:05.525 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" records repeat for tqpair=0x7fe3ec000b90, addr=10.0.0.2, port=4420, with successive timestamps from 2024-07-24 20:08:53.372317 through 20:08:53.421949 ...]
00:29:05.530 [2024-07-24 20:08:53.422472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.530 [2024-07-24 20:08:53.422500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:05.530 qpair failed and we were unable to recover it.
00:29:05.530 [2024-07-24 20:08:53.422917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.422925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.423510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.423538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.423951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.423960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.424416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.424443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.424863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.424871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 
00:29:05.530 [2024-07-24 20:08:53.425275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.425282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.425696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.425704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.425924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.425935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.426261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.426269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.426727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.426733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 
00:29:05.530 [2024-07-24 20:08:53.427126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.427133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.427548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.427555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.427755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.427763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.428193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.428208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.428608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.428615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 
00:29:05.530 [2024-07-24 20:08:53.428892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.428900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.429322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.429329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.429745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.429752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.430162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.430169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.430616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.430624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 
00:29:05.530 [2024-07-24 20:08:53.431022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.431029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.431544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.431572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.431987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.431996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.432503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.432531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.432951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.432960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 
00:29:05.530 [2024-07-24 20:08:53.433380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.433407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.433828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.530 [2024-07-24 20:08:53.433837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.530 qpair failed and we were unable to recover it. 00:29:05.530 [2024-07-24 20:08:53.434242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.434249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.434665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.434672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.434988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.434994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 
00:29:05.531 [2024-07-24 20:08:53.435382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.435390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.435815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.435821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.436270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.436281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.436692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.436698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.437028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.437035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 
00:29:05.531 [2024-07-24 20:08:53.437465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.437472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.437900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.437907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.438349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.438356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.438768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.438775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.439183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.439190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 
00:29:05.531 [2024-07-24 20:08:53.439618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.439625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.440027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.440034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.440550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.440578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.441485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.441501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.441822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.441829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 
00:29:05.531 [2024-07-24 20:08:53.442406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.442434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.442852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.442860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.443305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.443312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.443744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.443751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.444154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.444161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 
00:29:05.531 [2024-07-24 20:08:53.444577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.444585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.444989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.444996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.445524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.445553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.446003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.446013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 00:29:05.531 [2024-07-24 20:08:53.446524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.531 [2024-07-24 20:08:53.446552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.531 qpair failed and we were unable to recover it. 
00:29:05.799 [2024-07-24 20:08:53.446968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.799 [2024-07-24 20:08:53.446979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.799 qpair failed and we were unable to recover it. 00:29:05.799 [2024-07-24 20:08:53.447507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.799 [2024-07-24 20:08:53.447535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.799 qpair failed and we were unable to recover it. 00:29:05.799 [2024-07-24 20:08:53.447953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.799 [2024-07-24 20:08:53.447961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.799 qpair failed and we were unable to recover it. 00:29:05.799 [2024-07-24 20:08:53.448386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.799 [2024-07-24 20:08:53.448413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.799 qpair failed and we were unable to recover it. 00:29:05.799 [2024-07-24 20:08:53.448828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.799 [2024-07-24 20:08:53.448837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.799 qpair failed and we were unable to recover it. 
00:29:05.799 [2024-07-24 20:08:53.449170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.799 [2024-07-24 20:08:53.449177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.799 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.449615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.449622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.450021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.450029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.450524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.450551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.450972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.450980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 
00:29:05.800 [2024-07-24 20:08:53.451473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.451501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.451916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.451925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.452459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.452487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.452825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.452834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.453272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.453279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 
00:29:05.800 [2024-07-24 20:08:53.453616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.453623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.454045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.454052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.454566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.454576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.454985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.454992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.455440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.455468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 
00:29:05.800 [2024-07-24 20:08:53.455889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.455897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.456406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.456434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.456923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.456931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.457402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.457430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 00:29:05.800 [2024-07-24 20:08:53.457853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.800 [2024-07-24 20:08:53.457862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.800 qpair failed and we were unable to recover it. 
00:29:05.803 [2024-07-24 20:08:53.507448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.507456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.507890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.507897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.508301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.508311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.508756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.508763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.509186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.509193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 
00:29:05.803 [2024-07-24 20:08:53.509617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.509624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.510028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.510035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.510448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.510456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.510858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.510865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.511142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.511149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 
00:29:05.803 [2024-07-24 20:08:53.511616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.511623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.512021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.512027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.512559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.512587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.513000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.513008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.513517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.513545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 
00:29:05.803 [2024-07-24 20:08:53.513966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.513974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.514532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.514561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.514977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.514985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.515487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.515514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.515930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.515939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 
00:29:05.803 [2024-07-24 20:08:53.516474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.516502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.803 qpair failed and we were unable to recover it. 00:29:05.803 [2024-07-24 20:08:53.516920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.803 [2024-07-24 20:08:53.516928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.517439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.517466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.517890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.517899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.518338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.518346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 
00:29:05.804 [2024-07-24 20:08:53.518768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.518775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.519174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.519180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.519595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.519602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.520048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.520055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.520486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.520514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 
00:29:05.804 [2024-07-24 20:08:53.520931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.520940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.521464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.521491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.521941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.521950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.522460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.522488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.522901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.522910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 
00:29:05.804 [2024-07-24 20:08:53.523473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.523501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.523917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.523926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.524131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.524140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.524555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.524563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.524970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.524977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 
00:29:05.804 [2024-07-24 20:08:53.525386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.525393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.525827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.525834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.526244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.526256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.526453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.526463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.526941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.526948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 
00:29:05.804 [2024-07-24 20:08:53.527343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.527351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.527739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.527746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.528158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.528166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.528486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.528493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.528913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.528920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 
00:29:05.804 [2024-07-24 20:08:53.529332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.529339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.804 [2024-07-24 20:08:53.529768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.804 [2024-07-24 20:08:53.529775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.804 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.530215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.530223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.530630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.530637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.531035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.531042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 
00:29:05.805 [2024-07-24 20:08:53.531445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.531452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.531859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.531866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.532287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.532294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.532736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.532743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.533166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.533173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 
00:29:05.805 [2024-07-24 20:08:53.533589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.533596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.533907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.533914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.534391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.534398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.534732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.534738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.535053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.535061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 
00:29:05.805 [2024-07-24 20:08:53.535482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.535489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.535890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.535898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.536427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.536455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.536912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.536920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.537423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.537450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 
00:29:05.805 [2024-07-24 20:08:53.537870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.537879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.538304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.538311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.538727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.538733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.539137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.539145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 00:29:05.805 [2024-07-24 20:08:53.539575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.539583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it. 
00:29:05.805 [2024-07-24 20:08:53.540008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.805 [2024-07-24 20:08:53.540016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.805 qpair failed and we were unable to recover it.
00:29:05.805 .. 00:29:05.808 [the identical three-message failure: posix.c:1023:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.", repeats for every reconnect attempt from 2024-07-24 20:08:53.540524 through 20:08:53.586832]
00:29:05.808 [2024-07-24 20:08:53.587382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.587410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.587731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.587739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.588012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.588019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.588455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.588462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.588867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.588874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 
00:29:05.808 [2024-07-24 20:08:53.589284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.589291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.589730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.589736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.590054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.590060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.590481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.590487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.590800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.590807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 
00:29:05.808 [2024-07-24 20:08:53.591227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.591234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.591572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.808 [2024-07-24 20:08:53.591579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.808 qpair failed and we were unable to recover it. 00:29:05.808 [2024-07-24 20:08:53.592001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.592010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.592423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.592430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.592831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.592839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 
00:29:05.809 [2024-07-24 20:08:53.593280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.593287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.593700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.593706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.594067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.594082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.594492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.594499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.594906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.594912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 
00:29:05.809 [2024-07-24 20:08:53.595421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.595449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.595864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.595873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.596304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.596311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.596728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.596735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.597164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.597171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 
00:29:05.809 [2024-07-24 20:08:53.597594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.597601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.597999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.598006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.598529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.598557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.598896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.598904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.599339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.599346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 
00:29:05.809 [2024-07-24 20:08:53.599825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.599832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.600242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.600250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.600672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.600678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.601079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.601087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.601531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.601538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 
00:29:05.809 [2024-07-24 20:08:53.601976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.601984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.602305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.602312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.602537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.602547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.603041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.603048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.603501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.603508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 
00:29:05.809 [2024-07-24 20:08:53.603913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.603920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.604321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.604328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.604745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.604751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.604961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.604967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 00:29:05.809 [2024-07-24 20:08:53.605405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.809 [2024-07-24 20:08:53.605411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.809 qpair failed and we were unable to recover it. 
00:29:05.810 [2024-07-24 20:08:53.605892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.605899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.606297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.606305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.606605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.606612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.607029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.607036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.607373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.607380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 
00:29:05.810 [2024-07-24 20:08:53.607806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.607813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.608268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.608275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.608721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.608731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.609150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.609157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.609632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.609640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 
00:29:05.810 [2024-07-24 20:08:53.610084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.610090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.610581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.610588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.610991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.610998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.611509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.611536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.611954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.611962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 
00:29:05.810 [2024-07-24 20:08:53.612480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.612507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.612846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.612854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.613281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.613288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.613596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.613604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.614109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.614115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 
00:29:05.810 [2024-07-24 20:08:53.614432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.614440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.614861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.614868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.615269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.615276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.615479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.615489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.615922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.615928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 
00:29:05.810 [2024-07-24 20:08:53.616329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.616336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.616525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.616532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.616960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.616967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.617180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.617188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.617597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.617604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 
00:29:05.810 [2024-07-24 20:08:53.618006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.618013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.618423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.618430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.618831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.618837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.619257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.619264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 00:29:05.810 [2024-07-24 20:08:53.619662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.810 [2024-07-24 20:08:53.619669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.810 qpair failed and we were unable to recover it. 
00:29:05.810 [... the two messages above — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeat identically (timestamps only differing) from 20:08:53.620087 through 20:08:53.667630; repeated lines condensed ...]
00:29:05.813 [2024-07-24 20:08:53.668033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.813 [2024-07-24 20:08:53.668040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.813 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.668489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.668495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.668901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.668907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.669439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.669467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.669884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.669893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 
00:29:05.814 [2024-07-24 20:08:53.670216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.670223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.670624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.670631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.671063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.671070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.671357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.671365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.671691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.671701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 
00:29:05.814 [2024-07-24 20:08:53.672100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.672107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.672589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.672597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.673018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.673026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.673442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.673449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.673887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.673893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 
00:29:05.814 [2024-07-24 20:08:53.674414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.674441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.674940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.674948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.675467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.675495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.675913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.675921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.676425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.676452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 
00:29:05.814 [2024-07-24 20:08:53.676880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.676888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.677299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.677306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.677749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.677756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.678161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.678168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.678452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.678460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 
00:29:05.814 [2024-07-24 20:08:53.678910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.678917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.679342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.679349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.679756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.679762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.680161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.680168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.680579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.680587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 
00:29:05.814 [2024-07-24 20:08:53.681014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.681021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.681549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.681576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.681990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.681999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.682545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.682572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.682991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.682999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 
00:29:05.814 [2024-07-24 20:08:53.683528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.683556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.683883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.683891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.684464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.684492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.814 qpair failed and we were unable to recover it. 00:29:05.814 [2024-07-24 20:08:53.684923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.814 [2024-07-24 20:08:53.684932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.685481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.685508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.685936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.685945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.686483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.686510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.686927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.686936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.687270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.687278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.687709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.687716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.688120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.688127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.688438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.688445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.688870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.688877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.689289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.689296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.689732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.689742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.690167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.690173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.690575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.690582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.690900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.690906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.691343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.691350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.691760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.691766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.692170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.692177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.692578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.692586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.692996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.693003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.693520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.693548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.694026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.694035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.694539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.694567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.695019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.695027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.695537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.695564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.695982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.695990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.696441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.696468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.696882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.696891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.697410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.697437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.697856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.697864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.698265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.698272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.698681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.698689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.699114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.699121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.699557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.699564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.699981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.699987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.700386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.700393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.700799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.700806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 
00:29:05.815 [2024-07-24 20:08:53.701221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.815 [2024-07-24 20:08:53.701228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.815 qpair failed and we were unable to recover it. 00:29:05.815 [2024-07-24 20:08:53.701703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.816 [2024-07-24 20:08:53.701710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.816 qpair failed and we were unable to recover it. 00:29:05.816 [2024-07-24 20:08:53.702119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.816 [2024-07-24 20:08:53.702126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.816 qpair failed and we were unable to recover it. 00:29:05.816 [2024-07-24 20:08:53.702557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.816 [2024-07-24 20:08:53.702564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.816 qpair failed and we were unable to recover it. 00:29:05.816 [2024-07-24 20:08:53.703002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.816 [2024-07-24 20:08:53.703009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:05.816 qpair failed and we were unable to recover it. 
[... the same two-line error pair — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 2024-07-24 20:08:53.703430 through 2024-07-24 20:08:53.750623 (wall-clock markers 00:29:05.816–00:29:06.089), always for the same tqpair, address, and port ...]
00:29:06.089 [2024-07-24 20:08:53.751046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.751053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.751571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.751598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.752094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.752102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.752542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.752549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.752957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.752963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 
00:29:06.089 [2024-07-24 20:08:53.753490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.753517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.753931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.753939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.754460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.754487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.754905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.754913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.755220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.755228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 
00:29:06.089 [2024-07-24 20:08:53.755663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.755670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.755978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.755985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.756512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.756539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.757039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.757047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.757547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.757574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 
00:29:06.089 [2024-07-24 20:08:53.757993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.758002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.758531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.758558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.758970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.758979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.759499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.759526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.759940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.759952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 
00:29:06.089 [2024-07-24 20:08:53.760489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.760517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.760811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.760820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.761217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.761224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.761451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.761460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.761893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.761900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 
00:29:06.089 [2024-07-24 20:08:53.762304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.762312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.762807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.762814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.763217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.763224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.763619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.763626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 00:29:06.089 [2024-07-24 20:08:53.764117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.089 [2024-07-24 20:08:53.764124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.089 qpair failed and we were unable to recover it. 
00:29:06.089 [2024-07-24 20:08:53.764540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.764546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.764743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.764750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.765219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.765226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.765647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.765654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.766053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.766060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 
00:29:06.090 [2024-07-24 20:08:53.766463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.766470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.766882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.766889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.767293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.767301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.767735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.767742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.768228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.768235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 
00:29:06.090 [2024-07-24 20:08:53.768438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.768446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.768872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.768878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.769279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.769286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.769565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.769573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.769995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.770002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 
00:29:06.090 [2024-07-24 20:08:53.770425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.770432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.770902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.770909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.771104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.771112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.771479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.771486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.771817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.771824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 
00:29:06.090 [2024-07-24 20:08:53.772222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.772229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.772722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.772730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.773154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.773161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.773594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.773601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.774022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.774029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 
00:29:06.090 [2024-07-24 20:08:53.774314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.774322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.774809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.774816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.775281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.775288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.775594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.775601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.776054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.776063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 
00:29:06.090 [2024-07-24 20:08:53.776463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.776470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.776910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.776917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.777425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.777452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.777662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.777672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.778066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.778074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 
00:29:06.090 [2024-07-24 20:08:53.778274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.778283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.778721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.090 [2024-07-24 20:08:53.778727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.090 qpair failed and we were unable to recover it. 00:29:06.090 [2024-07-24 20:08:53.779139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.779145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.779566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.779574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.779940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.779946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 
00:29:06.091 [2024-07-24 20:08:53.780273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.780280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.780715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.780721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.781129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.781135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.781573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.781580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.782018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.782025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 
00:29:06.091 [2024-07-24 20:08:53.782370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.782377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.782760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.782768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.783054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.783062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.783270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.783279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 00:29:06.091 [2024-07-24 20:08:53.783690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.091 [2024-07-24 20:08:53.783697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.091 qpair failed and we were unable to recover it. 
00:29:06.094 [2024-07-24 20:08:53.833136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.833143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.833367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.833375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.833835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.833842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.834266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.834273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.834686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.834693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 
00:29:06.094 [2024-07-24 20:08:53.835144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.835150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.835579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.835587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.835866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.835880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.836304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.836311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.836715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.836722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 
00:29:06.094 [2024-07-24 20:08:53.837002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.837010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.837432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.837439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.837842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.837848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.838340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.838347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.838768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.838774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 
00:29:06.094 [2024-07-24 20:08:53.839107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.839113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.839346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.839354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.839793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.094 [2024-07-24 20:08:53.839799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.094 qpair failed and we were unable to recover it. 00:29:06.094 [2024-07-24 20:08:53.840206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.840214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.840536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.840543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.840858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.840865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.841313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.841320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.841729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.841735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.842137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.842144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.842610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.842617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.843023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.843030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.843531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.843559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.843974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.843982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.844404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.844432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.844849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.844861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.845189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.845196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.845646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.845654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.846074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.846081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.846615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.846643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.847049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.847057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.847565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.847592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.847857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.847866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.848408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.848436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.848853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.848862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.849178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.849186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.849605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.849612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.850052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.850059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.850576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.850603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.851020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.851028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.851474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.851502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.851916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.851925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.852445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.852472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.852907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.852915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.853428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.853456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.853900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.853908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.854454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.854482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.854939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.854947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.855441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.855468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.855886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.855895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 00:29:06.095 [2024-07-24 20:08:53.856345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.095 [2024-07-24 20:08:53.856352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.095 qpair failed and we were unable to recover it. 
00:29:06.095 [2024-07-24 20:08:53.856774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.856781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.857108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.857115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.857587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.857594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.858043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.858050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.858566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.858592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 
00:29:06.096 [2024-07-24 20:08:53.858912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.858921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.859361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.859369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.859775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.859781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.860184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.860191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.860603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.860610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 
00:29:06.096 [2024-07-24 20:08:53.861056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.861063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.861568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.861596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.862011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.862019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.862527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.862555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.863000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.863009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 
00:29:06.096 [2024-07-24 20:08:53.863519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.863547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.863968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.863976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.864500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.864528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.864945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.864954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.865473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.865500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 
00:29:06.096 [2024-07-24 20:08:53.865921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.865930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.866493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.866521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.866970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.866978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.867431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.867459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.867896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.867905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 
00:29:06.096 [2024-07-24 20:08:53.868447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.868474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.868919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.868928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.869460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.869488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.869810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.869818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.870250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.870257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 
00:29:06.096 [2024-07-24 20:08:53.870679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.870686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.871088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.871095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.871369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.871377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.871790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.871797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.872204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.872211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 
00:29:06.096 [2024-07-24 20:08:53.872528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.872534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.872949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.872956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.096 qpair failed and we were unable to recover it. 00:29:06.096 [2024-07-24 20:08:53.873440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.096 [2024-07-24 20:08:53.873447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.873910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.873917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.874473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.874501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.097 [2024-07-24 20:08:53.874930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.874939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.875364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.875377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.875790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.875797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.876226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.876233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.876497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.876508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.097 [2024-07-24 20:08:53.876930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.876937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.877249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.877257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.877473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.877481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.877869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.877876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.878295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.878302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.097 [2024-07-24 20:08:53.878731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.878739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.879162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.879170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.879605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.879612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.880037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.880044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.880470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.880478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.097 [2024-07-24 20:08:53.880901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.880908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.881438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.881466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.881895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.881904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.882348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.882356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.882759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.882766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.097 [2024-07-24 20:08:53.883235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.883243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.883658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.883665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.884111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.884118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.884519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.884527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.884948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.884956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.097 [2024-07-24 20:08:53.885376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.885383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.885856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.885863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.886272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.886280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.886699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.886706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.887065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.887073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.097 [2024-07-24 20:08:53.887486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.887494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.887912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.887920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.888460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.888488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.888927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.888936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 00:29:06.097 [2024-07-24 20:08:53.889470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.097 [2024-07-24 20:08:53.889498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.097 qpair failed and we were unable to recover it. 
00:29:06.098 [2024-07-24 20:08:53.889829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.889839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.890308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.890316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.890744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.890751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.891207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.891214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.891648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.891656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 
00:29:06.098 [2024-07-24 20:08:53.892083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.892090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.892534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.892545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.892987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.892994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.893508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.893536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.893972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.893981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 
00:29:06.098 [2024-07-24 20:08:53.894511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.894539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.894985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.894994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.895403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.895431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.895854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.895863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.896283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.896291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 
00:29:06.098 [2024-07-24 20:08:53.896750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.896757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.897156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.897162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.897557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.897564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.897961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.897969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.898502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.898530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 
00:29:06.098 [2024-07-24 20:08:53.898943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.898952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.899409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.899436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.899866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.899874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.900287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.900295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.900696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.900703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 
00:29:06.098 [2024-07-24 20:08:53.901100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.901106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.901527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.901534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.901980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.901987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.902322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.902329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.902617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.902625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 
00:29:06.098 [2024-07-24 20:08:53.903048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.098 [2024-07-24 20:08:53.903054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.098 qpair failed and we were unable to recover it. 00:29:06.098 [2024-07-24 20:08:53.903456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.903463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 00:29:06.099 [2024-07-24 20:08:53.903881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.903888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 00:29:06.099 [2024-07-24 20:08:53.904288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.904295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 00:29:06.099 [2024-07-24 20:08:53.904703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.904709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 
00:29:06.099 [2024-07-24 20:08:53.905111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.905117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 00:29:06.099 [2024-07-24 20:08:53.905535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.905542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 00:29:06.099 [2024-07-24 20:08:53.905948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.905955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 00:29:06.099 [2024-07-24 20:08:53.906378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.906386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 00:29:06.099 [2024-07-24 20:08:53.906805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.906812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 
00:29:06.099 [2024-07-24 20:08:53.907232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.099 [2024-07-24 20:08:53.907238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.099 qpair failed and we were unable to recover it. 
[ the same posix.c:1023 / nvme_tcp.c:2383 error pair and "qpair failed and we were unable to recover it." message repeats continuously for tqpair=0x7fe3ec000b90 (addr=10.0.0.2, port=4420) through 2024-07-24 20:08:53.956907 ]
00:29:06.102 [2024-07-24 20:08:53.957449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.957476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.957895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.957904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.958306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.958314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.958751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.958757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.959165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.959172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 
00:29:06.102 [2024-07-24 20:08:53.959563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.959571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.959885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.959892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.960303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.960310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.960714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.960721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.961124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.961130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 
00:29:06.102 [2024-07-24 20:08:53.961529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.961536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.961985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.961994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.962386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.962393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.962823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.962829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.963260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.963267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 
00:29:06.102 [2024-07-24 20:08:53.963692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.963699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.964142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.964150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.964575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.964582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.965004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.102 [2024-07-24 20:08:53.965010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.102 qpair failed and we were unable to recover it. 00:29:06.102 [2024-07-24 20:08:53.965507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.965534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.965952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.965960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.966479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.966507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.966927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.966935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.967442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.967470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.967924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.967933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.968426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.968453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.968870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.968878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.969302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.969310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.969746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.969753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.969971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.969977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.970377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.970384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.970831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.970838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.971329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.971336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.971563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.971572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.971994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.972001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.972404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.972411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.972857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.972863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.973268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.973274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.973690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.973697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.974106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.974113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.974311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.974321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.974634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.974641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.975072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.975080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.975539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.975547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.975994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.976000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.976508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.976535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.976846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.976855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.977310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.977317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.977734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.977741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.978061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.978067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.978486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.978494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.978940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.978949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.979479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.979506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.979927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.979935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.980463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.980490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 
00:29:06.103 [2024-07-24 20:08:53.980915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.103 [2024-07-24 20:08:53.980924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.103 qpair failed and we were unable to recover it. 00:29:06.103 [2024-07-24 20:08:53.981442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.981470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.981891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.981899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.982296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.982303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.982640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.982647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 
00:29:06.104 [2024-07-24 20:08:53.983087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.983093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.983527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.983534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.983935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.983941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.984343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.984350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.984774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.984781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 
00:29:06.104 [2024-07-24 20:08:53.985181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.985188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.985610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.985618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.986043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.986051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.986561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.986588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.987007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.987015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 
00:29:06.104 [2024-07-24 20:08:53.987530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.987557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.987977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.987985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.988504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.988532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.989024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.989033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.989518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.989545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 
00:29:06.104 [2024-07-24 20:08:53.989960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.989968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.990514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.990541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.990959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.990968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.991175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.991186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 00:29:06.104 [2024-07-24 20:08:53.991508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.104 [2024-07-24 20:08:53.991516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.104 qpair failed and we were unable to recover it. 
00:29:06.379 [2024-07-24 20:08:54.041228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.041240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.041637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.041645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.042055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.042063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.042578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.042605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.043030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.043038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 
00:29:06.379 [2024-07-24 20:08:54.043566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.043593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.043916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.043925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.044477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.044504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.044831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.044840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.045281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.045290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 
00:29:06.379 [2024-07-24 20:08:54.045734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.045740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.046148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.046155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.046631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.046638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.047043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.047050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 00:29:06.379 [2024-07-24 20:08:54.047477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-07-24 20:08:54.047505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.379 qpair failed and we were unable to recover it. 
00:29:06.379 [2024-07-24 20:08:54.047927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.047937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.048459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.048486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.048910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.048919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.049422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.049450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.049875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.049883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 
00:29:06.380 [2024-07-24 20:08:54.050296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.050303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.050749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.050759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.051212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.051220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.051682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.051688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.052098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.052105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 
00:29:06.380 [2024-07-24 20:08:54.052433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.052440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.052863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.052871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.053166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.053173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.053578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.053585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.054016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.054022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 
00:29:06.380 [2024-07-24 20:08:54.054340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.054352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.054783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.054790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.055244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.055251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.055701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.055708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.056160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.056167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 
00:29:06.380 [2024-07-24 20:08:54.056445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.056453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.056787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.056794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.057194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.057205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.057617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.057624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.057912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.057920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 
00:29:06.380 [2024-07-24 20:08:54.058360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.058367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.058861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.058869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.059312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.059319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.059723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.059729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.060161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.060168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 
00:29:06.380 [2024-07-24 20:08:54.060589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.060596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.060994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.061000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.061534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.061562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.061895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.061904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.062221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.062229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 
00:29:06.380 [2024-07-24 20:08:54.062658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.062665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.063070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.063077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.380 qpair failed and we were unable to recover it. 00:29:06.380 [2024-07-24 20:08:54.063367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.380 [2024-07-24 20:08:54.063374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.063566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.063572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.063978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.063985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 
00:29:06.381 [2024-07-24 20:08:54.064414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.064422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.064787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.064793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.065246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.065253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.065666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.065672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.065985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.065991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 
00:29:06.381 [2024-07-24 20:08:54.066326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.066332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.066661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.066669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.067081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.067087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.067532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.067538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.067965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.067972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 
00:29:06.381 [2024-07-24 20:08:54.068399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.068406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.068850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.068856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.069144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.069152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.069654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.069662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.070082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.070089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 
00:29:06.381 [2024-07-24 20:08:54.070465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.070472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.070887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.070895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.071319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.071327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.071770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.071778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 00:29:06.381 [2024-07-24 20:08:54.072220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.381 [2024-07-24 20:08:54.072228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.381 qpair failed and we were unable to recover it. 
00:29:06.381 [2024-07-24 20:08:54.072574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.381 [2024-07-24 20:08:54.072581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.381 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats roughly 110 more times, with log timestamps advancing from [2024-07-24 20:08:54.073009] through [2024-07-24 20:08:54.122684] ...]
00:29:06.384 [2024-07-24 20:08:54.123102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.384 [2024-07-24 20:08:54.123108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.123548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.123555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.123988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.123995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.124421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.124429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.124848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.124855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 
00:29:06.385 [2024-07-24 20:08:54.125253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.125260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.125669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.125676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.126072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.126079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.126555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.126562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.127003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.127009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 
00:29:06.385 [2024-07-24 20:08:54.127518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.127546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.127968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.127977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.128474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.128502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.128953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.128962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.129174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.129183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 
00:29:06.385 [2024-07-24 20:08:54.129609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.129616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.130025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.130032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.130547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.130574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.130997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.131011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.131580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.131609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 
00:29:06.385 [2024-07-24 20:08:54.132047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.132055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.132419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.132446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.132917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.132926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.133434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.133461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.133877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.133886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 
00:29:06.385 [2024-07-24 20:08:54.134403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.134431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.134923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.134932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.135404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.135432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.135761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.135770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.136210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.136218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 
00:29:06.385 [2024-07-24 20:08:54.136641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.136647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.137066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.137073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.137530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.137537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.137951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.137958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.138474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.138502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 
00:29:06.385 [2024-07-24 20:08:54.138923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.138931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.139441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.139469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.139913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.385 [2024-07-24 20:08:54.139922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.385 qpair failed and we were unable to recover it. 00:29:06.385 [2024-07-24 20:08:54.140353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.140360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.140787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.140794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 
00:29:06.386 [2024-07-24 20:08:54.141246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.141253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.141690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.141696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.142211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.142218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.142628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.142634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.142969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.142976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 
00:29:06.386 [2024-07-24 20:08:54.143388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.143396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.143816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.143822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.144264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.144271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.144677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.144683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.145086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.145092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 
00:29:06.386 [2024-07-24 20:08:54.145527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.145534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.145931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.145937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.146402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.146409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.146814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.146821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.147220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.147228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 
00:29:06.386 [2024-07-24 20:08:54.147504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.147510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.147832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.147839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.148281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.148288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.148708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.148716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.149030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.149038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 
00:29:06.386 [2024-07-24 20:08:54.149457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.149464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.149865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.149872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.150270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.150278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.150682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.150689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.151101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.151108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 
00:29:06.386 [2024-07-24 20:08:54.151530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.151537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.151972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.151978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.152377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.152384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.152689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.386 [2024-07-24 20:08:54.152695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.386 qpair failed and we were unable to recover it. 00:29:06.386 [2024-07-24 20:08:54.153137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.153144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 
00:29:06.387 [2024-07-24 20:08:54.153471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.153478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.153906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.153913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.154244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.154252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.154678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.154684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.154998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.155006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 
00:29:06.387 [2024-07-24 20:08:54.155441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.155448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.155862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.155869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.156274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.156281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.156588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.156595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 00:29:06.387 [2024-07-24 20:08:54.157017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.387 [2024-07-24 20:08:54.157023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.387 qpair failed and we were unable to recover it. 
00:29:06.390 [2024-07-24 20:08:54.203294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.203301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.203731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.203737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.204216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.204224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.204424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.204434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.204874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.204880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 
00:29:06.390 [2024-07-24 20:08:54.205082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.205090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.205499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.205506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.205910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.205916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.206346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.206353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.206526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.206534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 
00:29:06.390 [2024-07-24 20:08:54.206963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.206970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.207432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.207438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.207794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.207800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.208208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.208215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.208398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.208405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 
00:29:06.390 [2024-07-24 20:08:54.208897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.208904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.209302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.209309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.209709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.209716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.210157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.210163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.210569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.210576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 
00:29:06.390 [2024-07-24 20:08:54.211055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.390 [2024-07-24 20:08:54.211063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.390 qpair failed and we were unable to recover it. 00:29:06.390 [2024-07-24 20:08:54.211615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.211642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.212059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.212067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.212568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.212594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.213009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.213017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 
00:29:06.391 [2024-07-24 20:08:54.213537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.213564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.213985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.213993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.214506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.214534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.214950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.214963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.215488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.215516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 
00:29:06.391 [2024-07-24 20:08:54.215941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.215949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.216450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.216478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.216975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.216983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.217506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.217534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.217948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.217956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 
00:29:06.391 [2024-07-24 20:08:54.218481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.218508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.218928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.218937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.219462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.219490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.219909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.219917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.220362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.220370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 
00:29:06.391 [2024-07-24 20:08:54.220776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.220783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.221246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.221253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.221657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.221663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.222074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.222081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.222519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.222527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 
00:29:06.391 [2024-07-24 20:08:54.222951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.222959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.223485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.223513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.223932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.223940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.224368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.224395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.224910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.224918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 
00:29:06.391 [2024-07-24 20:08:54.225425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.225452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.225868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.225876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.226088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.226097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.226501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.226509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.226816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.226823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 
00:29:06.391 [2024-07-24 20:08:54.227034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.227043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.227481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.227489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.227907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.227914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.391 [2024-07-24 20:08:54.228337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.391 [2024-07-24 20:08:54.228344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.391 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.228544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.228553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.228974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.228981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.229427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.229434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.229832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.229838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.230241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.230248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.230679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.230686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.231115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.231122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.231561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.231568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.231892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.231899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.232327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.232336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.232735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.232742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.233139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.233145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.233459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.233467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.233882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.233888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.234327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.234334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.234727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.234733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.235176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.235183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.235607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.235614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.236005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.236011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.236502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.236530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.236946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.236954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.237511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.237540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.237975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.237984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.238516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.238543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.238989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.238998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.239520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.239548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.239968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.239976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.240516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.240543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.240981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.240990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.241562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.241589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.241915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.241923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.242465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.242492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.242952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.242960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.243480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.243507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.243830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.243839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.244265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.244273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 
00:29:06.392 [2024-07-24 20:08:54.244679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.244686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.392 [2024-07-24 20:08:54.245136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.392 [2024-07-24 20:08:54.245143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.392 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.245554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.245560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.245875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.245882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.246306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.246313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 
00:29:06.393 [2024-07-24 20:08:54.246711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.246719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.247140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.247147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.247564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.247571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.248013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.248020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.248433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.248441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 
00:29:06.393 [2024-07-24 20:08:54.248839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.248846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.249356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.249383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.249708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.249716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.250194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.250209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.250486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.250494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 
00:29:06.393 [2024-07-24 20:08:54.250920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.250927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.251342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.251349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.251741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.251747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.252204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.252211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.252516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.252523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 
00:29:06.393 [2024-07-24 20:08:54.252964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.252970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.253488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.253516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.253932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.253940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.254491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.254518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.254961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.254970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 
00:29:06.393 [2024-07-24 20:08:54.255500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.255528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.255942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.255951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.256471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.256498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.256957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.256966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.257394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.257422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 
00:29:06.393 [2024-07-24 20:08:54.257897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.257906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.258415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.258442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.393 [2024-07-24 20:08:54.258889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.393 [2024-07-24 20:08:54.258897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.393 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.259301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.259309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.259735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.259742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.260145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.260152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.260639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.260646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.261048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.261055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.261496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.261523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.261943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.261952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.262537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.262564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.262994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.263003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.263222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.263232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.263654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.263661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.264068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.264075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.264575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.264603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.265018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.265026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.265530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.265558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.266010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.266019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.266514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.266542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.266960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.266968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.267487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.267515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.267843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.267852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.268361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.268392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.268798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.268806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.269016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.269026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.269274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.269282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.269719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.269727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.270171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.270179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.270515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.270522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.270965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.270972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.271364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.271371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.271693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.271700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.272012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.272019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.272452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.272459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.272857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.272863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.273268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.273275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.273644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.273652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.274061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.274067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 00:29:06.394 [2024-07-24 20:08:54.274511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.274519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.394 qpair failed and we were unable to recover it. 
00:29:06.394 [2024-07-24 20:08:54.274926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.394 [2024-07-24 20:08:54.274933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.275472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.275499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.275898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.275906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.276407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.276435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.276850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.276859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 
00:29:06.395 [2024-07-24 20:08:54.277265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.277272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.277678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.277685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.278017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.278025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.278450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.278457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.278853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.278859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 
00:29:06.395 [2024-07-24 20:08:54.279261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.279268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.279713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.279720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.280144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.280150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.280596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.280603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 00:29:06.395 [2024-07-24 20:08:54.281023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.395 [2024-07-24 20:08:54.281030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.395 qpair failed and we were unable to recover it. 
00:29:06.395 [2024-07-24 20:08:54.281527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.281554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.281973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.281982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.282509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.282536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.282985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.282993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.283522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.283549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.283966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.283974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.284496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.284524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.284972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.284981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.285498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.285528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.285945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.285954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.286473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.286500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.286949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.286957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.287404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.287438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.287868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.287877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.288278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.288285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.288642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.288649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.288993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.289000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.289432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.289438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.289839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.289845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.290246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.290253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.290704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.290711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.291024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.291030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.291448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.395 [2024-07-24 20:08:54.291456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.395 qpair failed and we were unable to recover it.
00:29:06.395 [2024-07-24 20:08:54.291846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.291853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.292167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.292174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.292521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.292528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.292946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.292954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.293487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.293515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.293846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.293854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.294281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.294288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.294730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.294736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.295155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.295161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.295476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.295483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.295902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.295909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.296324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.296331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.296756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.296763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.297194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.297203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.297657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.297665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.298101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.298109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.298539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.298546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.298968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.298975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.299438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.299466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.299893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.299901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.300434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.300461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.300879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.300887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.301226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.301233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.301638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.301645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.302099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.302106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.302537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.302549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.302950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.302957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.303384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.303392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.303797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.303804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.304209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.304216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.304643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.304649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.305116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.305122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.305535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.305542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.305972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.305979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.306404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.306411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.306726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.306733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.306939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.306949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.307425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.396 [2024-07-24 20:08:54.307432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.396 qpair failed and we were unable to recover it.
00:29:06.396 [2024-07-24 20:08:54.307846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.307853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.308280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.308287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.308787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.308793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.309191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.309197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.309626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.309633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.310042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.310048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.310492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.310520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.310955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.310963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.311416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.311444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.311880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.311888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.312405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.312433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.312643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.312653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.313100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.313107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.313397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.313404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.313850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.313860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.314166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.314173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.314656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.314663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.315112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.315118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.315639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.315646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.316050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.316056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.316563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.316590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.317019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.317027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.317547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.317575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.318089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.318098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.318426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.318434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.318865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.318872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.319410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.319437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.319755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.319763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.320205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.320213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.397 [2024-07-24 20:08:54.320638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.397 [2024-07-24 20:08:54.320645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.397 qpair failed and we were unable to recover it.
00:29:06.674 [2024-07-24 20:08:54.321141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.674 [2024-07-24 20:08:54.321148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.674 qpair failed and we were unable to recover it.
00:29:06.674 [2024-07-24 20:08:54.321617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.674 [2024-07-24 20:08:54.321625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.674 qpair failed and we were unable to recover it.
00:29:06.674 [2024-07-24 20:08:54.322103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.674 [2024-07-24 20:08:54.322110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.674 qpair failed and we were unable to recover it.
00:29:06.674 [2024-07-24 20:08:54.322559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.674 [2024-07-24 20:08:54.322587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.674 qpair failed and we were unable to recover it.
00:29:06.674 [2024-07-24 20:08:54.323074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.674 [2024-07-24 20:08:54.323082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.674 qpair failed and we were unable to recover it. 00:29:06.674 [2024-07-24 20:08:54.323640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.674 [2024-07-24 20:08:54.323667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.674 qpair failed and we were unable to recover it. 00:29:06.674 [2024-07-24 20:08:54.324100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.674 [2024-07-24 20:08:54.324109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.674 qpair failed and we were unable to recover it. 00:29:06.674 [2024-07-24 20:08:54.324400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.674 [2024-07-24 20:08:54.324408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.674 qpair failed and we were unable to recover it. 00:29:06.674 [2024-07-24 20:08:54.324718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.674 [2024-07-24 20:08:54.324726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.674 qpair failed and we were unable to recover it. 
00:29:06.674 [2024-07-24 20:08:54.325154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.674 [2024-07-24 20:08:54.325161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.674 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.325454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.325462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.325762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.325768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.326198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.326209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.326543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.326550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 
00:29:06.675 [2024-07-24 20:08:54.326861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.326868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.327288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.327295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.327729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.327735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.328169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.328177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.328616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.328623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 
00:29:06.675 [2024-07-24 20:08:54.329039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.329045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.329565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.329593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.330022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.330030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.330542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.330569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.330906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.330914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 
00:29:06.675 [2024-07-24 20:08:54.331464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.331496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.331827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.331836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.332153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.332159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.332622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.332629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.333046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.333053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 
00:29:06.675 [2024-07-24 20:08:54.333584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.333612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.334044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.334053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.334582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.334609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.335022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.335030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.335529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.335556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 
00:29:06.675 [2024-07-24 20:08:54.335995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.336003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.336385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.336412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.336869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.336878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.337372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.337400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.337732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.337740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 
00:29:06.675 [2024-07-24 20:08:54.338183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.338190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.338618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.338625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.339082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.339089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.339500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.339511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.339974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.339981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 
00:29:06.675 [2024-07-24 20:08:54.340417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.340444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.340883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.340891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.675 [2024-07-24 20:08:54.341298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.675 [2024-07-24 20:08:54.341306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.675 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.341735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.341742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.342160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.342166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 
00:29:06.676 [2024-07-24 20:08:54.342369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.342377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.342817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.342824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.343228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.343235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.343642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.343649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.344120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.344127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 
00:29:06.676 [2024-07-24 20:08:54.344566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.344573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.344980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.344986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.345475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.345483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.345950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.345956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.346439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.346467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 
00:29:06.676 [2024-07-24 20:08:54.346879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.346887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.347296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.347304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.347726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.347733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.348164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.348171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.348604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.348611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 
00:29:06.676 [2024-07-24 20:08:54.349016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.349027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.349548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.349575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.349995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.350004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.350524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.350552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.350996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.351004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 
00:29:06.676 [2024-07-24 20:08:54.351520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.351547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.352033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.352042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.352625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.352653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.353099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.353107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.353628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.353655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 
00:29:06.676 [2024-07-24 20:08:54.354071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.354080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.354590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.354617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.355032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.355042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.355564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.355592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 00:29:06.676 [2024-07-24 20:08:54.355905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.676 [2024-07-24 20:08:54.355914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.676 qpair failed and we were unable to recover it. 
00:29:06.677 [2024-07-24 20:08:54.356461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.356489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.356904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.356913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.357428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.357455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.357873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.357882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.358330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.358338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 
00:29:06.677 [2024-07-24 20:08:54.358774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.358780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.359263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.359270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.359691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.359698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.360099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.360106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 00:29:06.677 [2024-07-24 20:08:54.360484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.677 [2024-07-24 20:08:54.360491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.677 qpair failed and we were unable to recover it. 
00:29:06.677 [2024-07-24 20:08:54.360916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.677 [2024-07-24 20:08:54.360922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.677 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-24 20:08:54.360916 through 20:08:54.410876 ...]
00:29:06.680 [2024-07-24 20:08:54.411456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.411483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.411902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.411910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.412314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.412321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.412728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.412734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.413140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.413147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 
00:29:06.680 [2024-07-24 20:08:54.413563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.413570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.413981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.413988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.414512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.414539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.414997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.415007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.415515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.415543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 
00:29:06.680 [2024-07-24 20:08:54.415960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.415968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.416511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.416538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.416956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.416965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.417463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.417491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 00:29:06.680 [2024-07-24 20:08:54.417911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.680 [2024-07-24 20:08:54.417920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.680 qpair failed and we were unable to recover it. 
00:29:06.680 [2024-07-24 20:08:54.418483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.418510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.418800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.418810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.419232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.419239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.419681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.419688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.420060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.420067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 
00:29:06.681 [2024-07-24 20:08:54.420484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.420494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.420920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.420928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.421439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.421467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.421897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.421906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.422308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.422316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 
00:29:06.681 [2024-07-24 20:08:54.422725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.422732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.423152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.423159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.423532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.423538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.423981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.423988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.424533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.424561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 
00:29:06.681 [2024-07-24 20:08:54.424977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.424986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.425509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.425536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.425956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.425964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.426480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.426507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.426918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.426926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 
00:29:06.681 [2024-07-24 20:08:54.427419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.427446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.427865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.427874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.428277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.428285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.428609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.428616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.428939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.428945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 
00:29:06.681 [2024-07-24 20:08:54.429372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.429379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.429709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.429717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.430136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.430143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.430547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.430554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.430956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.430964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 
00:29:06.681 [2024-07-24 20:08:54.431438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.431445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.431848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.431854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.432259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.432266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.432640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.432646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.433054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.433060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 
00:29:06.681 [2024-07-24 20:08:54.433394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.681 [2024-07-24 20:08:54.433402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.681 qpair failed and we were unable to recover it. 00:29:06.681 [2024-07-24 20:08:54.433694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.433701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.434123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.434130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.434594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.434601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.435001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.435008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 
00:29:06.682 [2024-07-24 20:08:54.435429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.435436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.435860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.435867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.436417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.436444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.436865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.436874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.437286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.437293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 
00:29:06.682 [2024-07-24 20:08:54.437700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.437709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.438155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.438162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.438587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.438594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.439043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.439051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.439416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.439443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 
00:29:06.682 [2024-07-24 20:08:54.439851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.439859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.440270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.440277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.440745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.440752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.441157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.441164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.441573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.441580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 
00:29:06.682 [2024-07-24 20:08:54.442027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.442033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.442568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.442596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.443050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.443059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.443582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.443609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 00:29:06.682 [2024-07-24 20:08:54.443938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.682 [2024-07-24 20:08:54.443946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.682 qpair failed and we were unable to recover it. 
00:29:06.682 [2024-07-24 20:08:54.444501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.682 [2024-07-24 20:08:54.444528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.682 qpair failed and we were unable to recover it.
00:29:06.682 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 20:08:54.444947 through 20:08:54.492554 ...]
00:29:06.686 [2024-07-24 20:08:54.492962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.492968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.493373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.493380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.493838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.493845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.494278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.494285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.494686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.494693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 
00:29:06.686 [2024-07-24 20:08:54.495094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.495100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.495571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.495578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.496012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.496019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.496442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.496449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.496873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.496880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 
00:29:06.686 [2024-07-24 20:08:54.497395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.497422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.497844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.497853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.498279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.498286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.498689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.498696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.499104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.499110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 
00:29:06.686 [2024-07-24 20:08:54.499610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.499617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.499930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.499937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.500352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.500359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.500807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.500813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.501019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.501030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 
00:29:06.686 [2024-07-24 20:08:54.501216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.501224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.501680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.501686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.502136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.502142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.502338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.502345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.502805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.502812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 
00:29:06.686 [2024-07-24 20:08:54.503213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.503220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.503693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.503699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.504105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.504112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.504550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.504557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.504836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.504844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 
00:29:06.686 [2024-07-24 20:08:54.505273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.505282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.505705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.505711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.506219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.506226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.686 qpair failed and we were unable to recover it. 00:29:06.686 [2024-07-24 20:08:54.506623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.686 [2024-07-24 20:08:54.506630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.507075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.507082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.507517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.507524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.507923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.507929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.508132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.508141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.508541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.508548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.508946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.508953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.509381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.509387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.509788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.509795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.510205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.510212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.510627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.510634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.511064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.511072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.511581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.511608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.512052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.512061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.512580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.512607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.512933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.512941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.513516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.513543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.513969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.513977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.514521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.514549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.514966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.514975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.515505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.515532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.515860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.515869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.516182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.516189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.516633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.516640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.517052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.517059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.517591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.517619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.518036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.518045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.518563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.518591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.519018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.519026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.519545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.519572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.519991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.519999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.520535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.520563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.520982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.520990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.521395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.521403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.521830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.521837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.522389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.522416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.687 [2024-07-24 20:08:54.522834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.522843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 
00:29:06.687 [2024-07-24 20:08:54.523175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.687 [2024-07-24 20:08:54.523186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.687 qpair failed and we were unable to recover it. 00:29:06.688 [2024-07-24 20:08:54.523496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.688 [2024-07-24 20:08:54.523504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.688 qpair failed and we were unable to recover it. 00:29:06.688 [2024-07-24 20:08:54.523920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.688 [2024-07-24 20:08:54.523928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.688 qpair failed and we were unable to recover it. 00:29:06.688 [2024-07-24 20:08:54.524395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.688 [2024-07-24 20:08:54.524422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.688 qpair failed and we were unable to recover it. 00:29:06.688 [2024-07-24 20:08:54.524869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.688 [2024-07-24 20:08:54.524878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.688 qpair failed and we were unable to recover it. 
00:29:06.688 [2024-07-24 20:08:54.525188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.688 [2024-07-24 20:08:54.525196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420
00:29:06.688 qpair failed and we were unable to recover it.
00:29:06.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3861560 Killed                  "${NVMF_APP[@]}" "$@"
00:29:06.688 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:06.688 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:06.688 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:06.688 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:06.688 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3862490
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3862490
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3862490 ']'
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:06.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:06.689 20:08:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.689 [2024-07-24 20:08:54.549573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.689 [2024-07-24 20:08:54.549581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.689 qpair failed and we were unable to recover it. 00:29:06.689 [2024-07-24 20:08:54.549910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.689 [2024-07-24 20:08:54.549919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.689 qpair failed and we were unable to recover it. 00:29:06.689 [2024-07-24 20:08:54.550346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.689 [2024-07-24 20:08:54.550356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.689 qpair failed and we were unable to recover it. 00:29:06.689 [2024-07-24 20:08:54.550782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.550790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.551235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.551242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 
00:29:06.690 [2024-07-24 20:08:54.551679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.551687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.552157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.552164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.552357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.552367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.552841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.552848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.553256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.553264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 
00:29:06.690 [2024-07-24 20:08:54.553691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.553699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.554099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.554107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.554574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.554582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.554876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.554884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.555305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.555313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 
00:29:06.690 [2024-07-24 20:08:54.555484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.555493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.555971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.555979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.556395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.556402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.556898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.556905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.557347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.557354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 
00:29:06.690 [2024-07-24 20:08:54.557794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.557801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.558236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.558244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.558707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.558714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.559129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.559136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.559544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.559551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 
00:29:06.690 [2024-07-24 20:08:54.559979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.559986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.560423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.560430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.560846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.560854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.561299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.561306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.561626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.561634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 
00:29:06.690 [2024-07-24 20:08:54.562062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.562069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.562542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.562549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.562981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.562988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.563176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.563184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.563672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.563679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 
00:29:06.690 [2024-07-24 20:08:54.564164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.564173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.564706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.564734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.565431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.690 [2024-07-24 20:08:54.565459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.690 qpair failed and we were unable to recover it. 00:29:06.690 [2024-07-24 20:08:54.565879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.691 [2024-07-24 20:08:54.565887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.691 qpair failed and we were unable to recover it. 00:29:06.691 [2024-07-24 20:08:54.566490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.691 [2024-07-24 20:08:54.566518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.691 qpair failed and we were unable to recover it. 
00:29:06.693 [2024-07-24 20:08:54.598185] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:29:06.693 [2024-07-24 20:08:54.598249] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:29:06.990 [2024-07-24 20:08:54.613032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.613039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.613587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.613615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.614066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.614075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.614615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.614648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.615087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.615096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 
00:29:06.990 [2024-07-24 20:08:54.615572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.615600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.616019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.616027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.616578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.616606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.617081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.617090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.617446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.617454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 
00:29:06.990 [2024-07-24 20:08:54.617874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.617880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.618323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.618330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.618769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.618776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.619172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.619179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.619611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.619619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 
00:29:06.990 [2024-07-24 20:08:54.620017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.620025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.620450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.620477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.620895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.620904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.621444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.621471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 00:29:06.990 [2024-07-24 20:08:54.621952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.990 [2024-07-24 20:08:54.621961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.990 qpair failed and we were unable to recover it. 
00:29:06.990 [2024-07-24 20:08:54.622482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.622510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.622933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.622941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.623445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.623472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.623891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.623899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.624224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.624232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 
00:29:06.991 [2024-07-24 20:08:54.624545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.624552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.624960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.624967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.625367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.625374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.625813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.625819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.626227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.626235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 
00:29:06.991 [2024-07-24 20:08:54.626681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.626688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.626986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.626999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.627439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.627446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.627846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.627853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.628167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.628175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 
00:29:06.991 [2024-07-24 20:08:54.628645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.628652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.628963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.628970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.629570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.629597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.991 [2024-07-24 20:08:54.630029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.630038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.630567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.630599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 
00:29:06.991 [2024-07-24 20:08:54.631018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.631026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.631544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.631571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.632017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.632026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.632542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.632569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.633034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.633043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 
00:29:06.991 [2024-07-24 20:08:54.633529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.633556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.634047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.634057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.634559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.991 [2024-07-24 20:08:54.634587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.991 qpair failed and we were unable to recover it. 00:29:06.991 [2024-07-24 20:08:54.634837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.634846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3ec000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 
00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 
Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Read completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 Write completed with error (sct=0, sc=8) 00:29:06.992 starting I/O failed 00:29:06.992 [2024-07-24 20:08:54.635572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.992 [2024-07-24 20:08:54.636117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.636161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.636740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.636829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.637282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.637328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 
00:29:06.992 [2024-07-24 20:08:54.637698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.637728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.638256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.638300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.638727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.638755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.639258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.639301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.639779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.639807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 
00:29:06.992 [2024-07-24 20:08:54.640158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.640186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.640664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.640693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.641180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.641221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.641669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.641699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.642170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.642198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 
00:29:06.992 [2024-07-24 20:08:54.642457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.642485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.642835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.642864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.643109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.643137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.643580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.643609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 00:29:06.992 [2024-07-24 20:08:54.643949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.643977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 
00:29:06.992 [2024-07-24 20:08:54.644462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.992 [2024-07-24 20:08:54.644490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.992 qpair failed and we were unable to recover it. 
[the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet for tqpair=0x7fe3e4000b90 (addr=10.0.0.2, port=4420) repeats with successive timestamps from 2024-07-24 20:08:54.644957 through 20:08:54.683386; repeats omitted] 
00:29:06.995 [2024-07-24 20:08:54.683651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 
[the identical error triplet then resumes at 2024-07-24 20:08:54.683865 and continues through 20:08:54.699762; repeats omitted] 
00:29:06.996 [2024-07-24 20:08:54.700232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.996 [2024-07-24 20:08:54.700261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.996 qpair failed and we were unable to recover it. 00:29:06.996 [2024-07-24 20:08:54.700707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.996 [2024-07-24 20:08:54.700735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.996 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.701229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.701258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.701713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.701742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.702211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.702239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 
00:29:06.997 [2024-07-24 20:08:54.702708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.702736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.703198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.703235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.703699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.703727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.704087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.704115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.704646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.704734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 
00:29:06.997 [2024-07-24 20:08:54.705253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.705313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.705793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.705823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.706288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.706318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.706802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.706830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.707301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.707331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 
00:29:06.997 [2024-07-24 20:08:54.707602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.707629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.708070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.708098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.708386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.708414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.708754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.708782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.709239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.709268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 
00:29:06.997 [2024-07-24 20:08:54.709699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.709727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.710231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.710262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.710708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.710736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.711187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.711224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.711531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.711561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 
00:29:06.997 [2024-07-24 20:08:54.712059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.712087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.712608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.712637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.713082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.713110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.713464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.713494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.713741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.713767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 
00:29:06.997 [2024-07-24 20:08:54.714222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.714252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.714745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.714774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.715228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.715257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.715707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.715735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.716208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.716237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 
00:29:06.997 [2024-07-24 20:08:54.716693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.716722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.717188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.717224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.997 [2024-07-24 20:08:54.717714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-07-24 20:08:54.717742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.997 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.718217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.718247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.718718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.718747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 
00:29:06.998 [2024-07-24 20:08:54.719198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.719235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.719712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.719742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.720092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.720120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.720561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.720651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.721176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.721224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 
00:29:06.998 [2024-07-24 20:08:54.721702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.721733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.722177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.722216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.722458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.722486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.722947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.722975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.723513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.723601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 
00:29:06.998 [2024-07-24 20:08:54.724145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.724191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.724681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.724712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.725176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.725215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.725674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.725703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.726165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.726193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 
00:29:06.998 [2024-07-24 20:08:54.726636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.726665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.727135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.727163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.727551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.727640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.728171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.728226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.728657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.728687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 
00:29:06.998 [2024-07-24 20:08:54.729154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.729182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.729683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.729712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.730154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.730183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.730654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.730683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.731064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.731092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 
00:29:06.998 [2024-07-24 20:08:54.731537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.731568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.732032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.732060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.732506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.732536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.733005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.733033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.733565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.733653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 
00:29:06.998 [2024-07-24 20:08:54.734175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.734223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.734692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.734722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.735188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.735296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.735655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-07-24 20:08:54.735683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.998 qpair failed and we were unable to recover it. 00:29:06.998 [2024-07-24 20:08:54.736155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.999 [2024-07-24 20:08:54.736184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:06.999 qpair failed and we were unable to recover it. 
00:29:06.999 [2024-07-24 20:08:54.736644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.736673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.737137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.737164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.737597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.737627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.738106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.738134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.738706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.738794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.739403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.739490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.740045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.740082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.740550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.740583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.741016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.741044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.741493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.741521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.741972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.742000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.742456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.742486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.742935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.742963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.743422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.743452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.743939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.743967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.744413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.744512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.745058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.745095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.745442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.745479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.745909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.745939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.746405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.746437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.746779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.746814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.747298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.747328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.747778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.747806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.748277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.748307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.748779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.748807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.749279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.749308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.749766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.749794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.750270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.750300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.750436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:06.999 [2024-07-24 20:08:54.750465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:06.999 [2024-07-24 20:08:54.750476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:06.999 [2024-07-24 20:08:54.750483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:06.999 [2024-07-24 20:08:54.750488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:06.999 [2024-07-24 20:08:54.750660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.750692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.750656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:06.999 [2024-07-24 20:08:54.750802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:06.999 [2024-07-24 20:08:54.750927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:06.999 [2024-07-24 20:08:54.750929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:06.999 [2024-07-24 20:08:54.751143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.751171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.751638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.751670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.752009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.999 [2024-07-24 20:08:54.752041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:06.999 qpair failed and we were unable to recover it.
00:29:06.999 [2024-07-24 20:08:54.752511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.752541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.753003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.753030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.753393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.753421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.753878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.753905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.754266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.754296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 
00:29:07.000 [2024-07-24 20:08:54.754752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.754780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.755232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.755261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.755729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.755757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.756093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.756124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.756649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.756678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 
00:29:07.000 [2024-07-24 20:08:54.757135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.757164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.757530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.757560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.757996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.758024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.758409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.758439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.758817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.758849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 
00:29:07.000 [2024-07-24 20:08:54.759317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.759347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.759829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.759857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.760325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.760354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.760813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.760841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.761178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.761213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 
00:29:07.000 [2024-07-24 20:08:54.761650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.761680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.761934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.761960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.762440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.762470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.762920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.762949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.763252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.763282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 
00:29:07.000 [2024-07-24 20:08:54.763785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.763814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.764258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.764286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.764765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.764793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.765246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.765275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 00:29:07.000 [2024-07-24 20:08:54.765713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.000 [2024-07-24 20:08:54.765742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.000 qpair failed and we were unable to recover it. 
00:29:07.000 [2024-07-24 20:08:54.766214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.000 [2024-07-24 20:08:54.766244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.000 qpair failed and we were unable to recover it.
00:29:07.000 [2024-07-24 20:08:54.766723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.000 [2024-07-24 20:08:54.766752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.000 qpair failed and we were unable to recover it.
00:29:07.000 [2024-07-24 20:08:54.767199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.767238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.767768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.767803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.768437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.768529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.769056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.769092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.769611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.769641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.770110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.770139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.770500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.770530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.770979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.771008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.771537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.771627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.772065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.772105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.772434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.772466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.772797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.772825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.773318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.773348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.773671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.773699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.774153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.774181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.774674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.774703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.775016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.775044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.775323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.775352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.775827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.775856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.776320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.776349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.776614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.776642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.777086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.777113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.777374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.777402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.777838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.777866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.778075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.778102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.778570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.778599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.778867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.778896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.779348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.779377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.779826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.779854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.780329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.780359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.780886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.780914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.781277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.781308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.781554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.781582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.782074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.782102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.782600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.782629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.782909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.782940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.001 [2024-07-24 20:08:54.783318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.001 [2024-07-24 20:08:54.783347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.001 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.783691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.783718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.784181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.784220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.784616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.784644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.784999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.785026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.785382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.785418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.785897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.785925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.786396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.786426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.786883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.786911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.787386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.787414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.787874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.787902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.788266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.002 [2024-07-24 20:08:54.788294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.002 qpair failed and we were unable to recover it.
00:29:07.002 [2024-07-24 20:08:54.788767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.788795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.789271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.789301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.789649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.789676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.790077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.790105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.790581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.790610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 
00:29:07.002 [2024-07-24 20:08:54.791082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.791110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.791571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.791600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.792069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.792097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.792597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.792626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.793095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.793124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 
00:29:07.002 [2024-07-24 20:08:54.793613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.793642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.794138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.794166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.794583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.794615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.795061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.795089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.795602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.795631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 
00:29:07.002 [2024-07-24 20:08:54.795979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.796010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.796236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.796264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.796740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.796767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.797237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.797267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.797751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.797779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 
00:29:07.002 [2024-07-24 20:08:54.798219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.798249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.798735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.798762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.799243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.799274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.799720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.799748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.002 [2024-07-24 20:08:54.800258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.800302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 
00:29:07.002 [2024-07-24 20:08:54.800673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.002 [2024-07-24 20:08:54.800700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.002 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.801047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.801075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.801427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.801462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.801912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.801940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.802425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.802455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 
00:29:07.003 [2024-07-24 20:08:54.802908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.802935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.803302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.803331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.803681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.803714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.804081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.804122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.804579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.804609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 
00:29:07.003 [2024-07-24 20:08:54.804856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.804883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.805332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.805362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.805803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.805831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.806365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.806393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.806851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.806879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 
00:29:07.003 [2024-07-24 20:08:54.807241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.807271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.807729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.807756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.808116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.808144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.808611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.808640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.808984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.809013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 
00:29:07.003 [2024-07-24 20:08:54.809470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.809499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.809974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.810002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.810366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.810396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.810838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.810866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.811111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.811138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 
00:29:07.003 [2024-07-24 20:08:54.811670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.811700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.812165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.812192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.812662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.812691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.812956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.812984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.813436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.813464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 
00:29:07.003 [2024-07-24 20:08:54.813919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.813947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.814265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.814295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.814763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.814790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.815162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.815189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.815661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.815691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 
00:29:07.003 [2024-07-24 20:08:54.816011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.816044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.816422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.816451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.816921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.003 [2024-07-24 20:08:54.816949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.003 qpair failed and we were unable to recover it. 00:29:07.003 [2024-07-24 20:08:54.817326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.817354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.817858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.817886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 
00:29:07.004 [2024-07-24 20:08:54.818350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.818380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.818815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.818843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.819388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.819418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.819887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.819914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.820289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.820319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 
00:29:07.004 [2024-07-24 20:08:54.820821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.820849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.821320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.821348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.821839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.821867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.822347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.822376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 00:29:07.004 [2024-07-24 20:08:54.822754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.004 [2024-07-24 20:08:54.822783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.004 qpair failed and we were unable to recover it. 
00:29:07.004 [2024-07-24 20:08:54.823237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.823266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.823619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.823647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.823985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.824012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.824471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.824500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.824885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.824913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.825078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.825108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.825565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.825593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.825859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.825887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.826240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.826270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.826753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.826782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.827239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.827269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.827716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.827744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.828223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.828252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.828551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.828579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.829077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.829105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.829607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.829635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.829954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.829982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.830337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.830367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.830852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.830880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.831451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.831480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.831939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.831967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.832289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.832317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.832635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.832663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.833108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.833136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.004 [2024-07-24 20:08:54.833580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.004 [2024-07-24 20:08:54.833609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.004 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.834071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.834105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.834563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.834593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.834947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.834974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.835505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.835534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.835975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.836003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.836464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.836492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.836861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.836891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.837395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.837424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.837694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.837729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.838212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.838241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.838634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.838662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.839153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.839181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.839537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.839566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.839829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.839856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.840118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.840146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.840685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.840715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.841192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.841231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.841471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.841498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.841931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.841959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.842412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.842441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.842787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.842824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.843281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.843310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.843792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.843819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.844058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.844090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.844563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.844592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.844934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.844966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.845310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.845339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.845812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.845840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.846104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.846133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.846368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.846397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.846878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.005 [2024-07-24 20:08:54.846906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.005 qpair failed and we were unable to recover it.
00:29:07.005 [2024-07-24 20:08:54.847153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.847181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.847643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.847670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.848142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.848170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.848533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.848566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.848818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.848845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.849187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.849228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.849520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.849555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.849909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.849936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.850398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.850429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.850675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.850709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.851179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.851238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.851758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.851785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.852257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.852286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.852760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.852788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.853257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.853287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.853742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.853771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.854237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.854267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.854716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.854743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.855217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.855246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.855746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.855775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.856249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.856278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.856633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.856660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.857017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.857044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.857510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.857546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.857913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.857943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.858400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.858430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.858902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.858929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.859199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.859245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.859607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.859635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.859991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.860019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.860485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.860513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.860972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.860999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.861563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.861653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.862050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.862086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.862487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.862520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.863006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.863035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.863509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.863540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.006 [2024-07-24 20:08:54.863972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.006 [2024-07-24 20:08:54.864000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.006 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.864366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.864395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.864766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.864794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.865231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.865260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.865754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.865782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.866139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.866167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.866647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.866677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.867145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.867173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.867651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.867681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.868118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.868146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.868626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.868656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.869016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.869044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.869324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.869360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.869832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.869860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.870336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.870366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.870718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.870746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.871002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.871029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.871350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.871379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.871662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.871690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.871961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.871989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.872232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.872262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.872389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.007 [2024-07-24 20:08:54.872416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.007 qpair failed and we were unable to recover it.
00:29:07.007 [2024-07-24 20:08:54.872873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.872900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.873147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.873174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.873478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.873507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.873830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.873858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.874329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.874359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 
00:29:07.007 [2024-07-24 20:08:54.874491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.874519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.874968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.874996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.875358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.875391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.875885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.875913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.876040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.876066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 
00:29:07.007 [2024-07-24 20:08:54.876309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.876338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.876810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.876838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.877271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.877299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.877797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.877825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 00:29:07.007 [2024-07-24 20:08:54.878219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.878248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.007 qpair failed and we were unable to recover it. 
00:29:07.007 [2024-07-24 20:08:54.878778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.007 [2024-07-24 20:08:54.878806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.879247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.879279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.879773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.879801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.880259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.880288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.880758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.880785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 
00:29:07.008 [2024-07-24 20:08:54.881257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.881286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.881759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.881787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.882311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.882340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.882789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.882818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.883297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.883327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 
00:29:07.008 [2024-07-24 20:08:54.883712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.883740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.884219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.884248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.884726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.884754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.885228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.885257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.885726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.885753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 
00:29:07.008 [2024-07-24 20:08:54.886281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.886315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.886647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.886674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.887161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.887189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.887701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.887731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.888074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.888101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 
00:29:07.008 [2024-07-24 20:08:54.888557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.888586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.888838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.888865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.889319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.889349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.889816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.889844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.890264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.890292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 
00:29:07.008 [2024-07-24 20:08:54.890558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.890584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.891051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.891079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.891542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.891570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.892050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.892079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.892542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.892572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 
00:29:07.008 [2024-07-24 20:08:54.893033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.893062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.893337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.893366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.893609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.893637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.894103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.894132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.894502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.894533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 
00:29:07.008 [2024-07-24 20:08:54.894893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.894932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.895430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.895460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.008 qpair failed and we were unable to recover it. 00:29:07.008 [2024-07-24 20:08:54.895937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.008 [2024-07-24 20:08:54.895965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.896195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.896250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.896610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.896642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 
00:29:07.009 [2024-07-24 20:08:54.896963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.896994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.897439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.897471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.897928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.897958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.898309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.898343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.898813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.898842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 
00:29:07.009 [2024-07-24 20:08:54.899320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.899349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.899612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.899651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.899908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.899937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.900293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.900325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.900808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.900837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 
00:29:07.009 [2024-07-24 20:08:54.901211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.901245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.901625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.901659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.902112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.902140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.902626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.902657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.903112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.903141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 
00:29:07.009 [2024-07-24 20:08:54.903618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.903655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.904105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.904135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.904612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.904641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.905103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.905133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 00:29:07.009 [2024-07-24 20:08:54.905616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.905645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 
00:29:07.009 [2024-07-24 20:08:54.906124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.009 [2024-07-24 20:08:54.906152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.009 qpair failed and we were unable to recover it. 
00:29:07.283 [... previous posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeated 114 more times for the same tqpair=0x7fe3e4000b90 (addr=10.0.0.2, port=4420) between 20:08:54.906 and 20:08:54.959, elapsed marker advancing 00:29:07.009 -> 00:29:07.283 ...] 
00:29:07.283 [2024-07-24 20:08:54.959957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.959997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.960461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.960494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.960983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.961014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.961571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.961664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.962196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.962249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 
00:29:07.283 [2024-07-24 20:08:54.962580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.962611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.963110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.963139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.963587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.963618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.963999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.964028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.964576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.964670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 
00:29:07.283 [2024-07-24 20:08:54.965198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.965262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.965724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.965755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.966262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.966309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.966698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.966728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.967221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.967253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 
00:29:07.283 [2024-07-24 20:08:54.967726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.967755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.283 [2024-07-24 20:08:54.968448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.283 [2024-07-24 20:08:54.968542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.283 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.969072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.969109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.969567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.969598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.970069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.970100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 
00:29:07.284 [2024-07-24 20:08:54.970250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.970280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.970542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.970572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.971037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.971066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.971551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.971581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.972077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.972106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 
00:29:07.284 [2024-07-24 20:08:54.972360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.972390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.972916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.972951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.973411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.973443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.973895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.973923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.974397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.974428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 
00:29:07.284 [2024-07-24 20:08:54.974677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.974707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.975193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.975231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.975747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.975776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.976028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.976055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.976331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.976361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 
00:29:07.284 [2024-07-24 20:08:54.976841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.976869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.977329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.977358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.977590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.977618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.978059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.978087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.978572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.978601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 
00:29:07.284 [2024-07-24 20:08:54.978850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.978884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.979137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.979164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.979658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.979687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.979954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.979983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.980443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.980473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 
00:29:07.284 [2024-07-24 20:08:54.980944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.980972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.981225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.981254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.981720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.981748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.982069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.982096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.982463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.982493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 
00:29:07.284 [2024-07-24 20:08:54.982957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.982985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.983456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.983483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.284 [2024-07-24 20:08:54.983841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.284 [2024-07-24 20:08:54.983868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.284 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.984332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.984361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.984737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.984765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 
00:29:07.285 [2024-07-24 20:08:54.985270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.985299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.985761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.985788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.986170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.986198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.986672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.986700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.987018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.987045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 
00:29:07.285 [2024-07-24 20:08:54.987504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.987533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.988007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.988036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.988505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.988534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.988796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.988823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.989300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.989329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 
00:29:07.285 [2024-07-24 20:08:54.989776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.989805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.990275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.990304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.990650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.990680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.991141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.991169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.991633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.991662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 
00:29:07.285 [2024-07-24 20:08:54.992030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.992058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.992522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.992551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.993026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.993053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.993515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.993544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 00:29:07.285 [2024-07-24 20:08:54.994026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.285 [2024-07-24 20:08:54.994053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.285 qpair failed and we were unable to recover it. 
00:29:07.285 [2024-07-24 20:08:54.994414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.285 [2024-07-24 20:08:54.994454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.285 qpair failed and we were unable to recover it.
00:29:07.285 [last three messages repeated for every subsequent connection attempt on tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 (errno 111 = ECONNREFUSED) through 2024-07-24 20:08:55.047]
00:29:07.289 [2024-07-24 20:08:55.047545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.047575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.048045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.048076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.048543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.048574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.049043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.049074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.049554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.049585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 
00:29:07.289 [2024-07-24 20:08:55.049941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.049971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.050364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.050395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.050858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.050887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.051380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.051412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.051897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.051927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 
00:29:07.289 [2024-07-24 20:08:55.052425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.052455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.052923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.052957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.053459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.053490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.053745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.053775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.054105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.054136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 
00:29:07.289 [2024-07-24 20:08:55.054607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.054639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.055122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.055154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.055677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.055711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.056158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.056189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 00:29:07.289 [2024-07-24 20:08:55.056671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.289 [2024-07-24 20:08:55.056702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.289 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.057125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.057154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.057651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.057682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.058009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.058042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.058535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.058567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.059066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.059097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.059461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.059496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.059860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.059900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.060376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.060408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.060676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.060705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.061162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.061194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.061569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.061599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.062097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.062128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.062607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.062639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.063045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.063077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.063604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.063636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.064123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.064153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.064630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.064661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.065026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.065066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.065545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.065576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.065853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.065883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.066174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.066213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.066672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.066703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.067174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.067211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.067675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.067705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.068193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.068233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.068693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.068724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.069173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.069215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.069688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.069718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.070068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.070100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.070591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.070623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.070986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.071017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.071581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.071680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.072121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.072161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.072664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.072698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.073184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.073228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 
00:29:07.290 [2024-07-24 20:08:55.073625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.290 [2024-07-24 20:08:55.073657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.290 qpair failed and we were unable to recover it. 00:29:07.290 [2024-07-24 20:08:55.074172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.074215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.074686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.074717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.075119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.075148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.075780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.075881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 
00:29:07.291 [2024-07-24 20:08:55.076556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.076654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.077195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.077250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.077615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.077648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.078129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.078161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.078770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.078802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 
00:29:07.291 [2024-07-24 20:08:55.079455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.079553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.080088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.080127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.080623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.080658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.081009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.081040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.081498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.081530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 
00:29:07.291 [2024-07-24 20:08:55.082018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.082049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.082514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.082545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.082996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.083028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.083500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.083534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 00:29:07.291 [2024-07-24 20:08:55.084018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.084049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it. 
00:29:07.291 [2024-07-24 20:08:55.084543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.291 [2024-07-24 20:08:55.084576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.291 qpair failed and we were unable to recover it.
[... previous error pair repeated: connect() failed with errno = 111 (ECONNREFUSED) and unrecoverable qpair failure for tqpair=0x7fe3e4000b90, addr=10.0.0.2, port=4420, from 2024-07-24 20:08:55.085035 through 20:08:55.137800 ...]
00:29:07.294 [2024-07-24 20:08:55.138286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.138317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.294 [2024-07-24 20:08:55.138577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.138607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.294 [2024-07-24 20:08:55.139093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.139124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.294 [2024-07-24 20:08:55.139598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.139630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.294 [2024-07-24 20:08:55.139944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.139986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 
00:29:07.294 [2024-07-24 20:08:55.140497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.140528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.294 [2024-07-24 20:08:55.141034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.141064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.294 [2024-07-24 20:08:55.141552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.141592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.294 [2024-07-24 20:08:55.142072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.294 [2024-07-24 20:08:55.142103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.294 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.142591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.142622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.143107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.143138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.143414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.143446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.143932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.143962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.144447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.144479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.144934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.144965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.145340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.145383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.145684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.145715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.146186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.146224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.146702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.146732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.146864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.146893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.147043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.147071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.147474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.147506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.147982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.148014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.148487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.148520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.149008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.149038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.149531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.149565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.149831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.149861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.150328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.150360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.150829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.150859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.151081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.151112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.151583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.151615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.152023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.152053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.152532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.152562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.153040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.153071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.153548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.153581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.154055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.154086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.154542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.154573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.155049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.155081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.155546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.155577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.156051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.156081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.156493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.156524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.156887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.156921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.157447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.157479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.157949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.157984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.295 [2024-07-24 20:08:55.158367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.158398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 
00:29:07.295 [2024-07-24 20:08:55.158859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.295 [2024-07-24 20:08:55.158889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.295 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.159385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.159417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.159902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.159940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.160395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.160427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.160904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.160934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 
00:29:07.296 [2024-07-24 20:08:55.161419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.161452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.161810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.161841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.162333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.162363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.162845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.162876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.163126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.163155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 
00:29:07.296 [2024-07-24 20:08:55.163662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.163693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.163939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.163968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.164437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.164468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.164865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.164896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.165371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.165403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 
00:29:07.296 [2024-07-24 20:08:55.165894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.165925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.166293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.166332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.166824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.166855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.167125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.167154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.167648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.167680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 
00:29:07.296 [2024-07-24 20:08:55.168154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.168186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.168665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.168696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.169173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.169220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.169570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.169600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.170081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.170112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 
00:29:07.296 [2024-07-24 20:08:55.170589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.170620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.171090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.171123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.171587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.171618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.172096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.172127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.172647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.172681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 
00:29:07.296 [2024-07-24 20:08:55.173154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.173185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.173663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.173695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.296 qpair failed and we were unable to recover it. 00:29:07.296 [2024-07-24 20:08:55.174165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.296 [2024-07-24 20:08:55.174198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.297 qpair failed and we were unable to recover it. 00:29:07.297 [2024-07-24 20:08:55.174694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.297 [2024-07-24 20:08:55.174725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.297 qpair failed and we were unable to recover it. 00:29:07.297 [2024-07-24 20:08:55.175125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.297 [2024-07-24 20:08:55.175156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.297 qpair failed and we were unable to recover it. 
00:29:07.570 [2024-07-24 20:08:55.229155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.229192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.229702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.229735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.230193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.230235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.230596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.230631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.231128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.231163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 
00:29:07.570 [2024-07-24 20:08:55.231802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.231908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.232562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.232668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.233231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.233271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.233768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.233800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.234404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.234511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 
00:29:07.570 [2024-07-24 20:08:55.235119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.235159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.235663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.235696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.236185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.236230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.236731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.236763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.237042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.237073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 
00:29:07.570 [2024-07-24 20:08:55.237538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.237570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.238054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.238086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.238563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.238607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.239097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.239130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.239599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.239632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 
00:29:07.570 [2024-07-24 20:08:55.240100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.240132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.240598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.240631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.241126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.241158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.241621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.241656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.242142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.242173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 
00:29:07.570 [2024-07-24 20:08:55.242543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.242577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.242832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.242862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.570 [2024-07-24 20:08:55.243332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.570 [2024-07-24 20:08:55.243363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.570 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.243856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.243888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.244370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.244402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.244680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.244710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.245258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.245290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.245784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.245815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.246294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.246326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.246768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.246798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.247160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.247191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.247694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.247727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.248218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.248251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.248781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.248811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.249179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.249221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.249689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.249719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.250215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.250247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.250772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.250804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.251364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.251397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.251765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.251798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.252410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.252517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.253085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.253123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.253610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.253643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.253974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.254009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.254490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.254522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.254916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.254947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.255425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.255460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.255830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.255860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.256345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.256376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.256871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.256901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.257391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.257424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.257929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.257961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.258239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.258281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.258725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.258756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.259011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.259040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.259531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.259565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.260048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.260079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.260578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.260610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.260880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.260909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 00:29:07.571 [2024-07-24 20:08:55.261415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.571 [2024-07-24 20:08:55.261446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.571 qpair failed and we were unable to recover it. 
00:29:07.571 [2024-07-24 20:08:55.261928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.261959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.262458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.262489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.262958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.262989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.263493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.263525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.264010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.264041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 
00:29:07.572 [2024-07-24 20:08:55.264415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.264459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.264986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.265018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.265380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.265413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.265886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.265917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.266418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.266450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 
00:29:07.572 [2024-07-24 20:08:55.266939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.266970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.267456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.267487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.267968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.268000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.268592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.268698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 00:29:07.572 [2024-07-24 20:08:55.269071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.572 [2024-07-24 20:08:55.269110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.572 qpair failed and we were unable to recover it. 
00:29:07.573 [2024-07-24 20:08:55.269626 … 20:08:55.322839] (the same three-line failure sequence — posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for the remainder of this interval)
00:29:07.575 [2024-07-24 20:08:55.323224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.323257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.323639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.323682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.324150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.324182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.324677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.324709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.325094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.325126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 
00:29:07.575 [2024-07-24 20:08:55.325633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.325666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.326040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.326073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.326347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.326380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.326872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.326906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.327181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.327224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 
00:29:07.575 [2024-07-24 20:08:55.327702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.327733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.328259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.328292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.328846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.328880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.329284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.329316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.329833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.329872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 
00:29:07.575 [2024-07-24 20:08:55.330135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.330164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.330636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.330668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.575 qpair failed and we were unable to recover it. 00:29:07.575 [2024-07-24 20:08:55.331157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.575 [2024-07-24 20:08:55.331188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.331674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.331706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.332189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.332231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.332723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.332755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.333197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.333254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.333763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.333796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.334280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.334314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.334698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.334730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.335266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.335322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.335827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.335858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.336335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.336367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.336873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.336903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.337390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.337421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.337905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.337940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.338409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.338443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.338898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.338928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.339267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.339299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.339799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.339830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.340314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.340347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.340617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.340646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.341112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.341143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.341622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.341654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.341905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.341935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.342304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.342337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.342609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.342639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.342989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.343019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.343502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.343535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.344027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.344059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.344528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.344559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.345017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.345048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.345304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.345334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.345814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.345844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.346319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.346352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.346828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.346860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.347338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.347370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.347860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.347890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.348368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.348401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 00:29:07.576 [2024-07-24 20:08:55.348892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.576 [2024-07-24 20:08:55.348929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.576 qpair failed and we were unable to recover it. 
00:29:07.576 [2024-07-24 20:08:55.349348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.349379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.349836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.349867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.350349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.350380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.350879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.350910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.351167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.351196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 
00:29:07.577 [2024-07-24 20:08:55.351712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.351742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.352232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.352264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.352768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.352798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.353076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.353105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.353586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.353619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 
00:29:07.577 [2024-07-24 20:08:55.354090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.354121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.354490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.354524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.355004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.355034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.355511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.355544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.355936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.355967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 
00:29:07.577 [2024-07-24 20:08:55.356328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.356360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.356847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.356877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.357392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.357423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.357913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.357944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.358419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.358450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 
00:29:07.577 [2024-07-24 20:08:55.358934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.358964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.359478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.359509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.360051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.360082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.360456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.360486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.360969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.361001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 
00:29:07.577 [2024-07-24 20:08:55.361469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.361499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.361986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.362017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.362624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.362731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.363105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.363144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.363659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.363694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 
00:29:07.577 [2024-07-24 20:08:55.364095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.364126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.364376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.364406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.364879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.364909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.365467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.365499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.365787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.365816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 
00:29:07.577 [2024-07-24 20:08:55.366277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.366308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.366805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.366835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.367300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.577 [2024-07-24 20:08:55.367333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.577 qpair failed and we were unable to recover it. 00:29:07.577 [2024-07-24 20:08:55.367819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.367849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.368351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.368400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.578 [2024-07-24 20:08:55.368872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.368903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.369286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.369329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.369611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.369641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.370168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.370198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.370724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.370756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.578 [2024-07-24 20:08:55.371283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.371316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.371818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.371849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.372221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.372258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.372618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.372654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.373145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.373176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.578 [2024-07-24 20:08:55.373656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.373687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.374063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.374096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.374577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.374608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.375107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.375138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.375534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.375570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.578 [2024-07-24 20:08:55.376055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.376088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.376348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.376379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.376754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.376788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.377266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.377297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.377761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.377791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.578 [2024-07-24 20:08:55.378277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.378308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.378824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.378855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.379192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.379237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.379715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.379746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.380031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.380061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.578 [2024-07-24 20:08:55.380496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.380528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.381029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.381059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.381498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.381529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.382007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.382038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.382540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.382572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.578 [2024-07-24 20:08:55.382824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.382854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.383386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.383418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.383901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.383932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.384404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.384438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 00:29:07.578 [2024-07-24 20:08:55.384923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.578 [2024-07-24 20:08:55.384953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.578 qpair failed and we were unable to recover it. 
00:29:07.579 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.579 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:07.579 [2024-07-24 20:08:55.385451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.385485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:07.579 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:07.579 [2024-07-24 20:08:55.386038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.386071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.579 [2024-07-24 20:08:55.386555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.386594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 
00:29:07.579 [2024-07-24 20:08:55.387121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.387153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.387686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.387717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.388405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.388512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.389042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.389080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.389556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.389592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 
00:29:07.579 [2024-07-24 20:08:55.390084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.390117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.390333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.390364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.390849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.390880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.391308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.391340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.391889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.391920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 
00:29:07.579 [2024-07-24 20:08:55.392401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.392433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.392951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.392982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.393357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.393391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.393602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.393633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.394134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.394167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 
00:29:07.579 [2024-07-24 20:08:55.394689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.394722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.395213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.395247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.395754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.395784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.396064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.396093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.396587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.396619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 
00:29:07.579 [2024-07-24 20:08:55.397111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.397142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.397680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.397711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.397968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.397998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.398272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.398303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.398449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.398477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 
00:29:07.579 [2024-07-24 20:08:55.398734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.579 [2024-07-24 20:08:55.398768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.579 qpair failed and we were unable to recover it. 00:29:07.579 [2024-07-24 20:08:55.399057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.399091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.399561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.399592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.400082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.400112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.400585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.400618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 
00:29:07.580 [2024-07-24 20:08:55.400877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.400906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.401177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.401227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.401636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.401666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.402163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.402193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.402689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.402720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 
00:29:07.580 [2024-07-24 20:08:55.402995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.403024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.403504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.403538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.404031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.404062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.404341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.404374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.404652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.404690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 
00:29:07.580 [2024-07-24 20:08:55.405175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.405212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.405682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.405713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.406199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.406239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.406770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.406800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 00:29:07.580 [2024-07-24 20:08:55.407459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.580 [2024-07-24 20:08:55.407565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420 00:29:07.580 qpair failed and we were unable to recover it. 
00:29:07.580 [2024-07-24 20:08:55.408167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.408222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.408774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.408806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.409488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.409594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.410119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.410157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.410540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.410574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.411053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.411085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.411594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.411626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.412154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.412186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.412706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.412737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.413228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.413261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.413759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.413791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.414403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.414512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.414879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.414918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.415413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.415447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.415936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.415968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.580 qpair failed and we were unable to recover it.
00:29:07.580 [2024-07-24 20:08:55.416547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.580 [2024-07-24 20:08:55.416653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.417014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.417052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.417471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.417507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.417992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.418025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.418470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.418502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.419007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.419038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.419430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.419464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.419709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.419739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.420226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.420259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.420766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.420799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.421281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.421315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.421783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.421814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.422091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.422121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.422536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.422567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.422941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.422986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.423457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.423489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.423973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.424005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.424483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.424516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.424997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.425028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.425390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.425437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.425950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.425982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.426458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.426492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:07.581 [2024-07-24 20:08:55.426855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.426892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:07.581 [2024-07-24 20:08:55.427196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.427241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.581 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.581 [2024-07-24 20:08:55.427755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.427790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.428456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.428561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.429039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.429084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.429581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.429618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.430098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.430130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.430594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.430625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.430888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.430919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.431425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.431457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.431823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.431854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.432350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.581 [2024-07-24 20:08:55.432383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.581 qpair failed and we were unable to recover it.
00:29:07.581 [2024-07-24 20:08:55.432639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.432670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.433165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.433196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.433715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.433747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.434297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.434329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.434820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.434851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.435307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.435338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.435831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.435861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.436261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.436294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.436614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.436649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.437179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.437217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.437740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.437771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.438272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.438305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.438787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.438818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.439354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.439385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.439858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.439889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.440316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.440350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.440597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.440627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.441083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.441113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.441609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.441641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.442137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.442168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.442625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.442658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.443023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.443054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.443524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.443557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.444012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.444049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.444549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.444582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.445111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.445142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.445579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.445611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.445976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.446009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.446482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.446516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.446965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.446997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.447478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.447511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.448015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.448047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.448396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.448429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.448924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.448954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.582 [2024-07-24 20:08:55.449437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.582 [2024-07-24 20:08:55.449469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.582 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.449966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.449996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.450329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.450361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.450650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.450682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.450968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.450998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.451440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.451473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.451966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.451997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 Malloc0
00:29:07.583 [2024-07-24 20:08:55.452466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.452499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.452880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.452911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.583 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:07.583 [2024-07-24 20:08:55.453409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.453442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.583 [2024-07-24 20:08:55.453926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.583 [2024-07-24 20:08:55.453959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.454495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.454527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.455004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.455035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.455622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.455728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.456485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.456605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.456994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.457031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.457551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.457586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.458099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.458135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.458624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.458658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.459115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.459146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.459457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.459492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.459543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:07.583 [2024-07-24 20:08:55.459983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.460015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.460608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.460714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.461450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.461557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.462115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.462153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.462705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.462739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.463266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.463325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.463845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.463897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.583 [2024-07-24 20:08:55.464374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.583 [2024-07-24 20:08:55.464408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.583 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.464898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.464930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.465303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.465336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.465832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.465864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.466348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.466382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.466807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.466839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.467312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.467344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.467819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.467851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.468224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.468257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.584 [2024-07-24 20:08:55.468766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.468796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:07.584 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.584 [2024-07-24 20:08:55.469449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.584 [2024-07-24 20:08:55.469553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.470089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.470128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.470609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.470644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.471142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.471174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.471610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.471642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.471996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.472027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.472426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.472459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.472846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.472878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.473163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.473193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.473690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.473720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.474237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.474270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.474769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.474800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.475405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.475510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.584 [2024-07-24 20:08:55.476103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.584 [2024-07-24 20:08:55.476141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.584 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.476473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.476509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.477008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.477038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.477434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.477467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.477995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.478028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.478511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.478543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.478994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.479025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.479194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.479243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.479749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.479779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.480468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.585 [2024-07-24 20:08:55.480574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:07.585 [2024-07-24 20:08:55.481183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.481242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.585 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.585 [2024-07-24 20:08:55.481750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.481783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.482412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.482518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.483088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.483127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.483594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.483627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.484112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.484144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.484623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.484655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.485140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.485171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.485497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.485539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.486049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.486080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.486459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.486495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.486858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.486896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.487384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.487418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.487883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.487913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.488414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.488446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.488616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.488646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.489150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.489182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.489667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.489698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.490197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.490240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.490736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.490767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.491482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.491587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.585 qpair failed and we were unable to recover it.
00:29:07.585 [2024-07-24 20:08:55.492147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-24 20:08:55.492186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.586 [2024-07-24 20:08:55.492718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.492752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:07.586 [2024-07-24 20:08:55.493123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.493159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.586 [2024-07-24 20:08:55.493589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.586 [2024-07-24 20:08:55.493622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.493997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.494028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.494442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.494547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.495103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.495155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.495600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.495634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.496158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.496189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.496665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.496698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.497173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.497214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.497700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.497730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.498136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.498167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.498560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.498591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.499077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.499109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.499581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.586 [2024-07-24 20:08:55.499615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe3e4000b90 with addr=10.0.0.2, port=4420
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.586 [2024-07-24 20:08:55.499925] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.586 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.586 [2024-07-24 20:08:55.510797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.586 [2024-07-24 20:08:55.511058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.586 [2024-07-24 20:08:55.511117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.586 [2024-07-24 20:08:55.511151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.586 [2024-07-24 20:08:55.511174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.586 [2024-07-24 20:08:55.511240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.586 qpair failed and we were unable to recover it.
00:29:07.850 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.850 20:08:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3861809
00:29:07.850 [2024-07-24 20:08:55.520681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.520866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.520907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.520923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.520938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.520975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.530654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.530805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.530854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.530870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.530880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.530916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.540581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.540708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.540749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.540760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.540768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.540798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.550595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.550728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.550770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.550782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.550795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.550825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.560649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.560754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.560785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.560795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.560802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.560826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.570683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.570797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.570825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.570834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.570841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.570865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.580678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.580800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.580830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.580839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.580846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.580869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.590713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.590835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.590863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.590872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.590879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.590901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.600720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.600826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.600856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.600865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.600872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.600894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.610733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.610845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.610874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.610883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.610890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.610912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.620776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.620878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.620907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.620916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.620922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.850 [2024-07-24 20:08:55.620945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.850 qpair failed and we were unable to recover it.
00:29:07.850 [2024-07-24 20:08:55.630831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.850 [2024-07-24 20:08:55.630943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.850 [2024-07-24 20:08:55.630972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.850 [2024-07-24 20:08:55.630981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.850 [2024-07-24 20:08:55.630988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.631009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.640838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.640948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.640977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.640993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.641000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.641021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.650905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.651060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.651088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.651097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.651105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.651125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.660913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.661024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.661053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.661063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.661070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.661092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.670980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.671098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.671127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.671136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.671144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.671166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.680962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.681067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.681095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.681105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.681112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.681134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.691012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.691128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.691157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.691166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.691173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.691194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.701014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.701119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.701148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.701156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.701164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.701185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.711075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.711188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.711222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.711231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.711238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.711261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.721101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.721241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.721270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.721279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.721286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.721309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.731159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.731276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.731305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.731321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.731328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.731350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.741122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.741243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.741271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.741281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.741288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.741309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.751222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.751334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.751362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.851 [2024-07-24 20:08:55.751372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.851 [2024-07-24 20:08:55.751379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.851 [2024-07-24 20:08:55.751400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.851 qpair failed and we were unable to recover it.
00:29:07.851 [2024-07-24 20:08:55.761375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.851 [2024-07-24 20:08:55.761500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.851 [2024-07-24 20:08:55.761529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.852 [2024-07-24 20:08:55.761538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.852 [2024-07-24 20:08:55.761545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.852 [2024-07-24 20:08:55.761567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.852 qpair failed and we were unable to recover it.
00:29:07.852 [2024-07-24 20:08:55.771216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.852 [2024-07-24 20:08:55.771319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.852 [2024-07-24 20:08:55.771349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.852 [2024-07-24 20:08:55.771358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.852 [2024-07-24 20:08:55.771365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.852 [2024-07-24 20:08:55.771387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.852 qpair failed and we were unable to recover it.
00:29:07.852 [2024-07-24 20:08:55.781349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.852 [2024-07-24 20:08:55.781470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.852 [2024-07-24 20:08:55.781499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.852 [2024-07-24 20:08:55.781508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.852 [2024-07-24 20:08:55.781515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.852 [2024-07-24 20:08:55.781537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.852 qpair failed and we were unable to recover it.
00:29:07.852 [2024-07-24 20:08:55.791425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.852 [2024-07-24 20:08:55.791544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.852 [2024-07-24 20:08:55.791574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.852 [2024-07-24 20:08:55.791583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.852 [2024-07-24 20:08:55.791590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:07.852 [2024-07-24 20:08:55.791611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.852 qpair failed and we were unable to recover it.
00:29:08.115 [2024-07-24 20:08:55.801350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.115 [2024-07-24 20:08:55.801472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.115 [2024-07-24 20:08:55.801502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.115 [2024-07-24 20:08:55.801511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.115 [2024-07-24 20:08:55.801518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.115 [2024-07-24 20:08:55.801539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.115 qpair failed and we were unable to recover it.
00:29:08.115 [2024-07-24 20:08:55.811397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.115 [2024-07-24 20:08:55.811503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.115 [2024-07-24 20:08:55.811532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.115 [2024-07-24 20:08:55.811542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.115 [2024-07-24 20:08:55.811549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.115 [2024-07-24 20:08:55.811571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.115 qpair failed and we were unable to recover it.
00:29:08.115 [2024-07-24 20:08:55.821361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.115 [2024-07-24 20:08:55.821467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.115 [2024-07-24 20:08:55.821502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.115 [2024-07-24 20:08:55.821512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.115 [2024-07-24 20:08:55.821519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.115 [2024-07-24 20:08:55.821540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.115 qpair failed and we were unable to recover it.
00:29:08.115 [2024-07-24 20:08:55.831446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.115 [2024-07-24 20:08:55.831569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.831598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.831607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.831615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.831636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.841412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.841515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.841544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.841553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.841561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.841582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.851476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.851588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.851617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.851626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.851633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.851655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.861513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.861632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.861661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.861670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.861677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.861705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.871590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.871709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.871738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.871748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.871755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.871776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.881585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.881706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.881747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.881757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.881765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.881794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.891535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.891640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.891671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.891680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.891688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.891711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.901635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.901738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.901768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.901777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.901784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.901806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.911664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.911792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.911840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.911852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.911859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.911887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.921683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.921805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.921846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.921860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.921867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.921895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.931723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.931842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.931883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.931893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.931900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.931929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.941768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.941888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.941918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.941928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.941935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.941959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.951806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.951929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.951970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.951981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.116 [2024-07-24 20:08:55.951995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.116 [2024-07-24 20:08:55.952023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.116 qpair failed and we were unable to recover it. 
00:29:08.116 [2024-07-24 20:08:55.961793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.116 [2024-07-24 20:08:55.962006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.116 [2024-07-24 20:08:55.962046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.116 [2024-07-24 20:08:55.962057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:55.962064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:55.962093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:55.971839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:55.971960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:55.971991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:55.971999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:55.972007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:55.972030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:55.981868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:55.982095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:55.982136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:55.982147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:55.982154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:55.982182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:55.991930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:55.992043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:55.992074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:55.992084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:55.992091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:55.992114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:56.001911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:56.002022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:56.002053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:56.002062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:56.002069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:56.002091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:56.012000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:56.012143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:56.012171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:56.012180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:56.012187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:56.012214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:56.022009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:56.022117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:56.022146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:56.022155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:56.022162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:56.022184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:56.032054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:56.032169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:56.032198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:56.032213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:56.032220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:56.032243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:56.041973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:56.042092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:56.042120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:56.042137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:56.042145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:56.042166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:56.052093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:56.052205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:56.052234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:56.052243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:56.052250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:56.052272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.117 [2024-07-24 20:08:56.062012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.117 [2024-07-24 20:08:56.062124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.117 [2024-07-24 20:08:56.062155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.117 [2024-07-24 20:08:56.062164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.117 [2024-07-24 20:08:56.062171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.117 [2024-07-24 20:08:56.062194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.117 qpair failed and we were unable to recover it. 
00:29:08.380 [2024-07-24 20:08:56.072116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.380 [2024-07-24 20:08:56.072236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.380 [2024-07-24 20:08:56.072264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.380 [2024-07-24 20:08:56.072273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.380 [2024-07-24 20:08:56.072280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.380 [2024-07-24 20:08:56.072301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.380 qpair failed and we were unable to recover it. 
00:29:08.380 [2024-07-24 20:08:56.082170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.380 [2024-07-24 20:08:56.082285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.380 [2024-07-24 20:08:56.082315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.380 [2024-07-24 20:08:56.082323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.380 [2024-07-24 20:08:56.082331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.380 [2024-07-24 20:08:56.082352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.380 qpair failed and we were unable to recover it. 
00:29:08.380 [2024-07-24 20:08:56.092221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.380 [2024-07-24 20:08:56.092361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.380 [2024-07-24 20:08:56.092389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.380 [2024-07-24 20:08:56.092398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.380 [2024-07-24 20:08:56.092405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.380 [2024-07-24 20:08:56.092427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.380 qpair failed and we were unable to recover it. 
00:29:08.380 [2024-07-24 20:08:56.102251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.380 [2024-07-24 20:08:56.102361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.380 [2024-07-24 20:08:56.102389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.380 [2024-07-24 20:08:56.102399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.380 [2024-07-24 20:08:56.102406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.380 [2024-07-24 20:08:56.102427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.380 qpair failed and we were unable to recover it. 
00:29:08.380 [2024-07-24 20:08:56.112271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.380 [2024-07-24 20:08:56.112395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.380 [2024-07-24 20:08:56.112423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.380 [2024-07-24 20:08:56.112433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.380 [2024-07-24 20:08:56.112440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.380 [2024-07-24 20:08:56.112461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.380 qpair failed and we were unable to recover it. 
00:29:08.381 [2024-07-24 20:08:56.122287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.122398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.122426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.122436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.122443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.122465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.132344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.132470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.132499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.132514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.132521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.132542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.142377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.142487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.142515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.142524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.142531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.142552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.152409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.152529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.152557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.152566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.152573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.152594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.162436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.162547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.162575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.162584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.162591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.162612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.172375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.172488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.172516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.172525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.172533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.172555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.182483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.182593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.182623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.182632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.182640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.182662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.192525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.192644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.192673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.192683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.192689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.192711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.202565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.202671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.202699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.202708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.202715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.202737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.212634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.212764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.212793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.212802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.212808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.212829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.222620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.222720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.222755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.222764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.222771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.222792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.232538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.232655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.232683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.232693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.381 [2024-07-24 20:08:56.232700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.381 [2024-07-24 20:08:56.232721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.381 qpair failed and we were unable to recover it.
00:29:08.381 [2024-07-24 20:08:56.242627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.381 [2024-07-24 20:08:56.242734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.381 [2024-07-24 20:08:56.242765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.381 [2024-07-24 20:08:56.242774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.242782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.242803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.252722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.252835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.252868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.252879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.252885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.252908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.262720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.262833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.262873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.262884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.262891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.262928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.272747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.272880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.272921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.272932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.272940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.272969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.282798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.283021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.283062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.283072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.283079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.283108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.292796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.292912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.292944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.292953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.292960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.292982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.302854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.302959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.302988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.302997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.303004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.303027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.312890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.313006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.313042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.313051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.313059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.313080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.382 [2024-07-24 20:08:56.322928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.382 [2024-07-24 20:08:56.323032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.382 [2024-07-24 20:08:56.323063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.382 [2024-07-24 20:08:56.323072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.382 [2024-07-24 20:08:56.323079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.382 [2024-07-24 20:08:56.323101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.382 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.332955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.333056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.333086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.333094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.333101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.333125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.342954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.343062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.343092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.343101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.343108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.343129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.352991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.353109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.353139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.353148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.353163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.353184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.362972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.363086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.363117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.363125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.363132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.363155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.373093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.373211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.373241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.373252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.373259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.373280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.383093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.383229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.383258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.383267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.383276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.383298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.393143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.393284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.393315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.393329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.393336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.393359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.403189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.403307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.645 [2024-07-24 20:08:56.403337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.645 [2024-07-24 20:08:56.403346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.645 [2024-07-24 20:08:56.403353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.645 [2024-07-24 20:08:56.403375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.645 qpair failed and we were unable to recover it.
00:29:08.645 [2024-07-24 20:08:56.413194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.645 [2024-07-24 20:08:56.413322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.646 [2024-07-24 20:08:56.413351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.646 [2024-07-24 20:08:56.413359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.646 [2024-07-24 20:08:56.413368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.646 [2024-07-24 20:08:56.413389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-07-24 20:08:56.423241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.646 [2024-07-24 20:08:56.423349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.646 [2024-07-24 20:08:56.423378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.646 [2024-07-24 20:08:56.423388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.646 [2024-07-24 20:08:56.423395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.646 [2024-07-24 20:08:56.423417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-07-24 20:08:56.433259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.646 [2024-07-24 20:08:56.433372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.646 [2024-07-24 20:08:56.433400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.646 [2024-07-24 20:08:56.433410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.646 [2024-07-24 20:08:56.433417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.646 [2024-07-24 20:08:56.433438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-07-24 20:08:56.443252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.646 [2024-07-24 20:08:56.443354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.646 [2024-07-24 20:08:56.443382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.646 [2024-07-24 20:08:56.443390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.646 [2024-07-24 20:08:56.443405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.646 [2024-07-24 20:08:56.443427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-07-24 20:08:56.453316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.646 [2024-07-24 20:08:56.453421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.646 [2024-07-24 20:08:56.453450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.646 [2024-07-24 20:08:56.453460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.646 [2024-07-24 20:08:56.453467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.646 [2024-07-24 20:08:56.453488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-07-24 20:08:56.463334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.646 [2024-07-24 20:08:56.463447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.646 [2024-07-24 20:08:56.463475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.646 [2024-07-24 20:08:56.463484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.646 [2024-07-24 20:08:56.463491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.646 [2024-07-24 20:08:56.463513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-07-24 20:08:56.473348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.646 [2024-07-24 20:08:56.473458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.646 [2024-07-24 20:08:56.473481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.646 [2024-07-24 20:08:56.473490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.646 [2024-07-24 20:08:56.473499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:08.646 [2024-07-24 20:08:56.473518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:08.646 qpair failed and we were unable to recover it.
00:29:08.646 [2024-07-24 20:08:56.483349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.646 [2024-07-24 20:08:56.483466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.646 [2024-07-24 20:08:56.483496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.646 [2024-07-24 20:08:56.483505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.646 [2024-07-24 20:08:56.483513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.646 [2024-07-24 20:08:56.483534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-07-24 20:08:56.493434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.646 [2024-07-24 20:08:56.493545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.646 [2024-07-24 20:08:56.493576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.646 [2024-07-24 20:08:56.493585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.646 [2024-07-24 20:08:56.493593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.646 [2024-07-24 20:08:56.493616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-07-24 20:08:56.503466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.646 [2024-07-24 20:08:56.503586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.646 [2024-07-24 20:08:56.503616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.646 [2024-07-24 20:08:56.503625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.646 [2024-07-24 20:08:56.503633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.646 [2024-07-24 20:08:56.503654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-07-24 20:08:56.513499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.646 [2024-07-24 20:08:56.513624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.646 [2024-07-24 20:08:56.513652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.646 [2024-07-24 20:08:56.513661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.646 [2024-07-24 20:08:56.513668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.646 [2024-07-24 20:08:56.513690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-07-24 20:08:56.523476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.646 [2024-07-24 20:08:56.523705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.646 [2024-07-24 20:08:56.523735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.646 [2024-07-24 20:08:56.523743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.646 [2024-07-24 20:08:56.523750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.646 [2024-07-24 20:08:56.523772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-07-24 20:08:56.533523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.646 [2024-07-24 20:08:56.533636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.646 [2024-07-24 20:08:56.533666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.646 [2024-07-24 20:08:56.533683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.646 [2024-07-24 20:08:56.533690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.646 [2024-07-24 20:08:56.533712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.646 qpair failed and we were unable to recover it. 
00:29:08.646 [2024-07-24 20:08:56.543573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.646 [2024-07-24 20:08:56.543800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.646 [2024-07-24 20:08:56.543829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.646 [2024-07-24 20:08:56.543839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.647 [2024-07-24 20:08:56.543846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.647 [2024-07-24 20:08:56.543868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.647 qpair failed and we were unable to recover it. 
00:29:08.647 [2024-07-24 20:08:56.553611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.647 [2024-07-24 20:08:56.553749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.647 [2024-07-24 20:08:56.553790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.647 [2024-07-24 20:08:56.553802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.647 [2024-07-24 20:08:56.553810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.647 [2024-07-24 20:08:56.553839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.647 qpair failed and we were unable to recover it. 
00:29:08.647 [2024-07-24 20:08:56.563630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.647 [2024-07-24 20:08:56.563758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.647 [2024-07-24 20:08:56.563799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.647 [2024-07-24 20:08:56.563810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.647 [2024-07-24 20:08:56.563818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.647 [2024-07-24 20:08:56.563846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.647 qpair failed and we were unable to recover it. 
00:29:08.647 [2024-07-24 20:08:56.573668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.647 [2024-07-24 20:08:56.573786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.647 [2024-07-24 20:08:56.573816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.647 [2024-07-24 20:08:56.573825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.647 [2024-07-24 20:08:56.573832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.647 [2024-07-24 20:08:56.573856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.647 qpair failed and we were unable to recover it. 
00:29:08.647 [2024-07-24 20:08:56.583653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.647 [2024-07-24 20:08:56.583770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.647 [2024-07-24 20:08:56.583811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.647 [2024-07-24 20:08:56.583822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.647 [2024-07-24 20:08:56.583831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.647 [2024-07-24 20:08:56.583859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.647 qpair failed and we were unable to recover it. 
00:29:08.647 [2024-07-24 20:08:56.593616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.647 [2024-07-24 20:08:56.593732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.647 [2024-07-24 20:08:56.593765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.647 [2024-07-24 20:08:56.593774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.647 [2024-07-24 20:08:56.593781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.647 [2024-07-24 20:08:56.593808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.647 qpair failed and we were unable to recover it. 
00:29:08.910 [2024-07-24 20:08:56.603724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.910 [2024-07-24 20:08:56.603833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.910 [2024-07-24 20:08:56.603862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.910 [2024-07-24 20:08:56.603872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.910 [2024-07-24 20:08:56.603880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.910 [2024-07-24 20:08:56.603902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.910 qpair failed and we were unable to recover it. 
00:29:08.910 [2024-07-24 20:08:56.613782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.910 [2024-07-24 20:08:56.613897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.910 [2024-07-24 20:08:56.613938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.910 [2024-07-24 20:08:56.613949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.910 [2024-07-24 20:08:56.613957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.910 [2024-07-24 20:08:56.613987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.910 qpair failed and we were unable to recover it. 
00:29:08.910 [2024-07-24 20:08:56.623796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.910 [2024-07-24 20:08:56.624027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.910 [2024-07-24 20:08:56.624079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.910 [2024-07-24 20:08:56.624092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.910 [2024-07-24 20:08:56.624101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.910 [2024-07-24 20:08:56.624129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.910 qpair failed and we were unable to recover it. 
00:29:08.910 [2024-07-24 20:08:56.633829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.910 [2024-07-24 20:08:56.633951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.910 [2024-07-24 20:08:56.633983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.910 [2024-07-24 20:08:56.633993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.910 [2024-07-24 20:08:56.634000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.910 [2024-07-24 20:08:56.634023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.910 qpair failed and we were unable to recover it. 
00:29:08.910 [2024-07-24 20:08:56.643866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.643986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.644027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.644038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.644045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.644073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.653862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.653976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.654007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.654016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.654024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.654047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.663949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.664074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.664105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.664114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.664121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.664152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.673908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.674018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.674048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.674058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.674065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.674087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.683998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.684121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.684152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.684161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.684168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.684190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.694018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.694125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.694153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.694162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.694169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.694190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.704083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.704190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.704227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.704236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.704244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.704266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.714069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.714291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.714328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.714337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.714343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.714365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.724100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.724219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.724250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.724258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.724266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.724288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.734119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.734237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.734265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.734275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.734282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.734303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.744104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.744220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.744250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.744259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.744266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.744287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.754227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.911 [2024-07-24 20:08:56.754351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.911 [2024-07-24 20:08:56.754379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.911 [2024-07-24 20:08:56.754389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.911 [2024-07-24 20:08:56.754396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.911 [2024-07-24 20:08:56.754424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.911 qpair failed and we were unable to recover it. 
00:29:08.911 [2024-07-24 20:08:56.764227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.764336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.764365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.764374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.764381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.764402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.774272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.774377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.774405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.774414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.774422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.774444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.784307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.784417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.784447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.784456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.784464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.784486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.794348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.794465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.794493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.794503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.794510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.794531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.804404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.804518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.804546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.804555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.804563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.804585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.814400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.814624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.814652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.814661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.814668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.814689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.824425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.824533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.824562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.824571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.824578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.824599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.834461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.834575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.834603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.834612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.834620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.834641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.844495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.844611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.844638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.844647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.844666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.844688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:08.912 [2024-07-24 20:08:56.854531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.912 [2024-07-24 20:08:56.854640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.912 [2024-07-24 20:08:56.854669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.912 [2024-07-24 20:08:56.854678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.912 [2024-07-24 20:08:56.854685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:08.912 [2024-07-24 20:08:56.854707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:08.912 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-07-24 20:08:56.864588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.176 [2024-07-24 20:08:56.864727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.176 [2024-07-24 20:08:56.864749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.176 [2024-07-24 20:08:56.864758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.176 [2024-07-24 20:08:56.864765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.176 [2024-07-24 20:08:56.864784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-07-24 20:08:56.874619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.176 [2024-07-24 20:08:56.874758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.176 [2024-07-24 20:08:56.874786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.176 [2024-07-24 20:08:56.874795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.176 [2024-07-24 20:08:56.874802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.176 [2024-07-24 20:08:56.874824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-07-24 20:08:56.884608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.176 [2024-07-24 20:08:56.884730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.884771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.884782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.884790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.884818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.894633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.894745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.894787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.894798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.894805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.894833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.904686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.904792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.904824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.904833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.904840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.904863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.914602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.914723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.914752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.914761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.914768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.914790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.924786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.924906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.924933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.924942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.924949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.924971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.934730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.934836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.934864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.934881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.934889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.934912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.944808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.944923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.944952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.944962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.944969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.944990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.954821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.954944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.954972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.954981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.954988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.955010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.964891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.965007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.965037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.965047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.965054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.965076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.974896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.975009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.975038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.975046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.975054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.975075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.984813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.984922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.984952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.984962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.984969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.984991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:56.994881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:56.995000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.177 [2024-07-24 20:08:56.995028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.177 [2024-07-24 20:08:56.995037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.177 [2024-07-24 20:08:56.995044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.177 [2024-07-24 20:08:56.995065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-07-24 20:08:57.004985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.177 [2024-07-24 20:08:57.005094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.005123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.005132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.005139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.005160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.015044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.178 [2024-07-24 20:08:57.015208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.015237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.015246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.015253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.015274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.025043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.178 [2024-07-24 20:08:57.025156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.025192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.025208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.025216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.025238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.035130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.178 [2024-07-24 20:08:57.035269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.035297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.035306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.035313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.035335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.045112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.178 [2024-07-24 20:08:57.045238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.045267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.045276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.045283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.045305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.055177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.178 [2024-07-24 20:08:57.055313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.055342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.055351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.055358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.055380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.065058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.178 [2024-07-24 20:08:57.065173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.065208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.065218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.065226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.065254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.075212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.178 [2024-07-24 20:08:57.075334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.178 [2024-07-24 20:08:57.075362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.178 [2024-07-24 20:08:57.075371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.178 [2024-07-24 20:08:57.075379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.178 [2024-07-24 20:08:57.075401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-07-24 20:08:57.085247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.178 [2024-07-24 20:08:57.085366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.178 [2024-07-24 20:08:57.085395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.178 [2024-07-24 20:08:57.085405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.178 [2024-07-24 20:08:57.085412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.178 [2024-07-24 20:08:57.085433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.178 qpair failed and we were unable to recover it.
00:29:09.178 [2024-07-24 20:08:57.095270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.178 [2024-07-24 20:08:57.095390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.178 [2024-07-24 20:08:57.095418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.178 [2024-07-24 20:08:57.095429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.178 [2024-07-24 20:08:57.095436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.178 [2024-07-24 20:08:57.095459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.178 qpair failed and we were unable to recover it.
00:29:09.178 [2024-07-24 20:08:57.105296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.178 [2024-07-24 20:08:57.105413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.178 [2024-07-24 20:08:57.105441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.178 [2024-07-24 20:08:57.105450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.178 [2024-07-24 20:08:57.105458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.178 [2024-07-24 20:08:57.105479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.178 qpair failed and we were unable to recover it.
00:29:09.178 [2024-07-24 20:08:57.115314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.179 [2024-07-24 20:08:57.115433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.179 [2024-07-24 20:08:57.115468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.179 [2024-07-24 20:08:57.115477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.179 [2024-07-24 20:08:57.115484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.179 [2024-07-24 20:08:57.115505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.179 [2024-07-24 20:08:57.125338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.179 [2024-07-24 20:08:57.125457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.179 [2024-07-24 20:08:57.125485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.179 [2024-07-24 20:08:57.125495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.179 [2024-07-24 20:08:57.125502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.179 [2024-07-24 20:08:57.125524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.441 [2024-07-24 20:08:57.135283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.135398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.135425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.135434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.135442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.135463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.145504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.145610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.145639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.145648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.145655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.145676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.155371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.155479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.155506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.155515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.155522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.155551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.165455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.165561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.165591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.165599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.165607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.165628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.175474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.175579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.175606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.175614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.175621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.175643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.185538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.185642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.185670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.185679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.185687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.185709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.195541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.195659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.195686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.195696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.195704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.195726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.205570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.205679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.205714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.205722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.205729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.205750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.215563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.215672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.215700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.215709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.215716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.215737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.225624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.225729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.225760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.225769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.225778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.225801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.235690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.235822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.235863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.235874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.235881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.235909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.245705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.442 [2024-07-24 20:08:57.245821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.442 [2024-07-24 20:08:57.245863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.442 [2024-07-24 20:08:57.245874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.442 [2024-07-24 20:08:57.245889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.442 [2024-07-24 20:08:57.245917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-24 20:08:57.255717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.255832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.255874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.255885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.255892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.255920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.265686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.265794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.265826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.265835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.265842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.265867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.275786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.275906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.275935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.275945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.275953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.275975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.285704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.285815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.285844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.285853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.285859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.285881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.295850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.295956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.295985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.295993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.296000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.296022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.305839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.305947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.305975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.305984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.305990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.306013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.315852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.315968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.315996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.316005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.316014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.316036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.325975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.326095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.326136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.326147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.326155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.326183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.335968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.336078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.336109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.336125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.336134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.336158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.346021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.346129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.346157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.346166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.346173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.346194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.356056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.356169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.356198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.356214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.356221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.356242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.443 [2024-07-24 20:08:57.365981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.443 [2024-07-24 20:08:57.366089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.443 [2024-07-24 20:08:57.366117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.443 [2024-07-24 20:08:57.366127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.443 [2024-07-24 20:08:57.366135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.443 [2024-07-24 20:08:57.366156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.443 qpair failed and we were unable to recover it.
00:29:09.444 [2024-07-24 20:08:57.376104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.444 [2024-07-24 20:08:57.376212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.444 [2024-07-24 20:08:57.376241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.444 [2024-07-24 20:08:57.376250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.444 [2024-07-24 20:08:57.376257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.444 [2024-07-24 20:08:57.376281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.444 qpair failed and we were unable to recover it.
00:29:09.444 [2024-07-24 20:08:57.386108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.444 [2024-07-24 20:08:57.386333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.444 [2024-07-24 20:08:57.386363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.444 [2024-07-24 20:08:57.386372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.444 [2024-07-24 20:08:57.386379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.444 [2024-07-24 20:08:57.386401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.444 qpair failed and we were unable to recover it.
00:29:09.707 [2024-07-24 20:08:57.396185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.707 [2024-07-24 20:08:57.396303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.707 [2024-07-24 20:08:57.396332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.707 [2024-07-24 20:08:57.396342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.707 [2024-07-24 20:08:57.396349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.707 [2024-07-24 20:08:57.396371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.707 qpair failed and we were unable to recover it.
00:29:09.707 [2024-07-24 20:08:57.406080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.707 [2024-07-24 20:08:57.406188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.707 [2024-07-24 20:08:57.406221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.707 [2024-07-24 20:08:57.406229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.707 [2024-07-24 20:08:57.406236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.707 [2024-07-24 20:08:57.406260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.707 qpair failed and we were unable to recover it.
00:29:09.707 [2024-07-24 20:08:57.416247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.707 [2024-07-24 20:08:57.416357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.707 [2024-07-24 20:08:57.416386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.707 [2024-07-24 20:08:57.416395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.707 [2024-07-24 20:08:57.416401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.707 [2024-07-24 20:08:57.416423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.707 qpair failed and we were unable to recover it.
00:29:09.707 [2024-07-24 20:08:57.426188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.707 [2024-07-24 20:08:57.426302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.707 [2024-07-24 20:08:57.426331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.707 [2024-07-24 20:08:57.426347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.707 [2024-07-24 20:08:57.426354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.707 [2024-07-24 20:08:57.426377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.707 qpair failed and we were unable to recover it.
00:29:09.707 [2024-07-24 20:08:57.436315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.707 [2024-07-24 20:08:57.436433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.707 [2024-07-24 20:08:57.436463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.707 [2024-07-24 20:08:57.436471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.708 [2024-07-24 20:08:57.436478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.708 [2024-07-24 20:08:57.436500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.708 qpair failed and we were unable to recover it.
00:29:09.708 [2024-07-24 20:08:57.446311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.446461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.446492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.446505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.446512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.446535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.456257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.456367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.456397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.456406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.456413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.456435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.466375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.466483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.466510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.466519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.466528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.466549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.476435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.476552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.476580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.476590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.476597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.476618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.486447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.486555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.486583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.486593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.486600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.486621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.496455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.496684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.496712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.496720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.496727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.496747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.506540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.506651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.506682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.506692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.506700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.506724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.516559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.516683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.516723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.516733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.516740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.516763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.526561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.526664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.526694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.526704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.526711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.526733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.536639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.536743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.536772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.536780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.536787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.536809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.546702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.546842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.546872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.708 [2024-07-24 20:08:57.546884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.708 [2024-07-24 20:08:57.546891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.708 [2024-07-24 20:08:57.546914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.708 qpair failed and we were unable to recover it. 
00:29:09.708 [2024-07-24 20:08:57.556651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.708 [2024-07-24 20:08:57.556765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.708 [2024-07-24 20:08:57.556797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.556805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.556812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.556848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.566717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.566832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.566855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.566863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.566871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.566892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.576726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.576874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.576913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.576923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.576930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.576957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.586739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.586846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.586878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.586887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.586894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.586917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.596806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.596937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.596978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.596991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.597000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.597027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.606824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.606936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.606975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.606985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.606993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.607017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.616760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.616864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.616895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.616905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.616913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.616935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.627014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.627123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.627153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.627162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.627170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.627192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.636940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.637091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.637125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.637134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.637142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.637166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.646964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.647076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.647105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.647114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.647131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.647153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.709 [2024-07-24 20:08:57.656996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.709 [2024-07-24 20:08:57.657103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.709 [2024-07-24 20:08:57.657133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.709 [2024-07-24 20:08:57.657142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.709 [2024-07-24 20:08:57.657150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.709 [2024-07-24 20:08:57.657171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.709 qpair failed and we were unable to recover it. 
00:29:09.973 [2024-07-24 20:08:57.667029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.973 [2024-07-24 20:08:57.667142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.973 [2024-07-24 20:08:57.667171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.973 [2024-07-24 20:08:57.667180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.973 [2024-07-24 20:08:57.667187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.973 [2024-07-24 20:08:57.667214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.973 qpair failed and we were unable to recover it. 
00:29:09.973 [2024-07-24 20:08:57.677061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.973 [2024-07-24 20:08:57.677175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.973 [2024-07-24 20:08:57.677212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.973 [2024-07-24 20:08:57.677222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.973 [2024-07-24 20:08:57.677229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.973 [2024-07-24 20:08:57.677250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.973 qpair failed and we were unable to recover it. 
00:29:09.973 [2024-07-24 20:08:57.687074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.973 [2024-07-24 20:08:57.687185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.973 [2024-07-24 20:08:57.687220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.973 [2024-07-24 20:08:57.687230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.973 [2024-07-24 20:08:57.687237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.973 [2024-07-24 20:08:57.687259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.973 qpair failed and we were unable to recover it. 
00:29:09.973 [2024-07-24 20:08:57.697101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.973 [2024-07-24 20:08:57.697213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.973 [2024-07-24 20:08:57.697243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.973 [2024-07-24 20:08:57.697252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.973 [2024-07-24 20:08:57.697260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.973 [2024-07-24 20:08:57.697282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.973 qpair failed and we were unable to recover it. 
00:29:09.973 [2024-07-24 20:08:57.707039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.973 [2024-07-24 20:08:57.707148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.973 [2024-07-24 20:08:57.707176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.973 [2024-07-24 20:08:57.707187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.973 [2024-07-24 20:08:57.707194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.973 [2024-07-24 20:08:57.707221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.973 qpair failed and we were unable to recover it. 
00:29:09.973 [2024-07-24 20:08:57.717079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.973 [2024-07-24 20:08:57.717195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.973 [2024-07-24 20:08:57.717234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.973 [2024-07-24 20:08:57.717243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.973 [2024-07-24 20:08:57.717250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:09.973 [2024-07-24 20:08:57.717273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.973 qpair failed and we were unable to recover it. 
00:29:09.973 [2024-07-24 20:08:57.727229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.973 [2024-07-24 20:08:57.727334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.973 [2024-07-24 20:08:57.727364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.973 [2024-07-24 20:08:57.727372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.973 [2024-07-24 20:08:57.727379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.973 [2024-07-24 20:08:57.727400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.973 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.737237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.737345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.737373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.737390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.737397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.737420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.747268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.747391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.747420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.747429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.747436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.747457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.757288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.757404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.757432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.757441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.757450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.757471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.767670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.767835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.767864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.767872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.767879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.767901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.777459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.777562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.777590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.777599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.777607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.777629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.787444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.787557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.787587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.787596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.787603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.787624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.797499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.797619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.797648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.797657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.797664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.797685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.807505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.807610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.807638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.807648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.807655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.807675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.817404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.817516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.817544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.817553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.817560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.817580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.827533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.827670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.827698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.827714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.827721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.827742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.837554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.974 [2024-07-24 20:08:57.837671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.974 [2024-07-24 20:08:57.837704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.974 [2024-07-24 20:08:57.837713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.974 [2024-07-24 20:08:57.837721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.974 [2024-07-24 20:08:57.837744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.974 qpair failed and we were unable to recover it.
00:29:09.974 [2024-07-24 20:08:57.847607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.847713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.847742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.847751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.847759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.847781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:09.975 [2024-07-24 20:08:57.857631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.857741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.857782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.857793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.857801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.857829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:09.975 [2024-07-24 20:08:57.867664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.867773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.867807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.867818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.867825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.867848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:09.975 [2024-07-24 20:08:57.877674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.877786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.877815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.877824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.877834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.877855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:09.975 [2024-07-24 20:08:57.887764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.887868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.887897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.887906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.887913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.887936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:09.975 [2024-07-24 20:08:57.897757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.897863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.897893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.897902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.897909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.897931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:09.975 [2024-07-24 20:08:57.907784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.907887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.907915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.907924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.907931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.907952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:09.975 [2024-07-24 20:08:57.917942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.975 [2024-07-24 20:08:57.918067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.975 [2024-07-24 20:08:57.918103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.975 [2024-07-24 20:08:57.918112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.975 [2024-07-24 20:08:57.918119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:09.975 [2024-07-24 20:08:57.918141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:09.975 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.927822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.927937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.927964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.927973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.927980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.928002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.937867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.937970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.937998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.938008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.938015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.938037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.947893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.947994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.948019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.948027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.948034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.948055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.957924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.958044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.958070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.958079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.958086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.958112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.967877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.967974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.967998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.968006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.968013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.968033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.977981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.978081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.978106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.978114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.978122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.978141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.987921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.988032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.988055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.988063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.988072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.988090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:57.997905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:57.998012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:57.998035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:57.998044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:57.998051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:57.998069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.239 [2024-07-24 20:08:58.008025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.239 [2024-07-24 20:08:58.008116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.239 [2024-07-24 20:08:58.008144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.239 [2024-07-24 20:08:58.008152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.239 [2024-07-24 20:08:58.008158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.239 [2024-07-24 20:08:58.008176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.239 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.018056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.240 [2024-07-24 20:08:58.018153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.240 [2024-07-24 20:08:58.018174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.240 [2024-07-24 20:08:58.018181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.240 [2024-07-24 20:08:58.018188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.240 [2024-07-24 20:08:58.018208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.240 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.028120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.240 [2024-07-24 20:08:58.028232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.240 [2024-07-24 20:08:58.028252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.240 [2024-07-24 20:08:58.028260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.240 [2024-07-24 20:08:58.028267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.240 [2024-07-24 20:08:58.028285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.240 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.038137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.240 [2024-07-24 20:08:58.038237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.240 [2024-07-24 20:08:58.038258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.240 [2024-07-24 20:08:58.038265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.240 [2024-07-24 20:08:58.038272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.240 [2024-07-24 20:08:58.038289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.240 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.048132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.240 [2024-07-24 20:08:58.048220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.240 [2024-07-24 20:08:58.048240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.240 [2024-07-24 20:08:58.048248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.240 [2024-07-24 20:08:58.048259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.240 [2024-07-24 20:08:58.048276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.240 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.058171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.240 [2024-07-24 20:08:58.058269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.240 [2024-07-24 20:08:58.058289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.240 [2024-07-24 20:08:58.058296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.240 [2024-07-24 20:08:58.058303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.240 [2024-07-24 20:08:58.058320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.240 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.068221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.240 [2024-07-24 20:08:58.068316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.240 [2024-07-24 20:08:58.068335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.240 [2024-07-24 20:08:58.068344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.240 [2024-07-24 20:08:58.068351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.240 [2024-07-24 20:08:58.068367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.240 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.078255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.240 [2024-07-24 20:08:58.078352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.240 [2024-07-24 20:08:58.078370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.240 [2024-07-24 20:08:58.078380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.240 [2024-07-24 20:08:58.078386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.240 [2024-07-24 20:08:58.078403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.240 qpair failed and we were unable to recover it.
00:29:10.240 [2024-07-24 20:08:58.088230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.240 [2024-07-24 20:08:58.088319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.240 [2024-07-24 20:08:58.088337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.240 [2024-07-24 20:08:58.088345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.240 [2024-07-24 20:08:58.088352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.240 [2024-07-24 20:08:58.088368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.240 qpair failed and we were unable to recover it. 
00:29:10.240 [2024-07-24 20:08:58.098172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.240 [2024-07-24 20:08:58.098270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.240 [2024-07-24 20:08:58.098288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.240 [2024-07-24 20:08:58.098296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.240 [2024-07-24 20:08:58.098302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.098318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.108326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.108418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.108437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.108444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.108452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.108469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.118409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.118522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.118540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.118547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.118553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.118569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.128246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.128338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.128354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.128362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.128368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.128384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.138378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.138475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.138493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.138500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.138510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.138529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.148449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.148551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.148568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.148576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.148582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.148598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.158430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.158547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.158564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.158572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.158578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.158593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.168428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.168515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.168532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.168539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.168546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.168561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.178491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.178581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.178598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.178605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.178612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.178627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.241 [2024-07-24 20:08:58.188540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.241 [2024-07-24 20:08:58.188636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.241 [2024-07-24 20:08:58.188653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.241 [2024-07-24 20:08:58.188660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.241 [2024-07-24 20:08:58.188667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.241 [2024-07-24 20:08:58.188682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.241 qpair failed and we were unable to recover it. 
00:29:10.504 [2024-07-24 20:08:58.198577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.504 [2024-07-24 20:08:58.198672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.504 [2024-07-24 20:08:58.198688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.504 [2024-07-24 20:08:58.198695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.504 [2024-07-24 20:08:58.198702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.504 [2024-07-24 20:08:58.198718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.504 qpair failed and we were unable to recover it. 
00:29:10.504 [2024-07-24 20:08:58.208514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.504 [2024-07-24 20:08:58.208604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.504 [2024-07-24 20:08:58.208621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.504 [2024-07-24 20:08:58.208629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.504 [2024-07-24 20:08:58.208635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.504 [2024-07-24 20:08:58.208651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.504 qpair failed and we were unable to recover it. 
00:29:10.504 [2024-07-24 20:08:58.218579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.504 [2024-07-24 20:08:58.218676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.504 [2024-07-24 20:08:58.218693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.504 [2024-07-24 20:08:58.218701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.504 [2024-07-24 20:08:58.218707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.504 [2024-07-24 20:08:58.218722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.504 qpair failed and we were unable to recover it. 
00:29:10.504 [2024-07-24 20:08:58.228657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.504 [2024-07-24 20:08:58.228751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.504 [2024-07-24 20:08:58.228768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.504 [2024-07-24 20:08:58.228779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.504 [2024-07-24 20:08:58.228785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.504 [2024-07-24 20:08:58.228801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.504 qpair failed and we were unable to recover it. 
00:29:10.504 [2024-07-24 20:08:58.238665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.238766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.238792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.238801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.238808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.238828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.248636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.248727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.248754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.248763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.248771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.248791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.258679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.258764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.258783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.258791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.258798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.258815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.268733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.268827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.268844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.268852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.268858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.268874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.278781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.278894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.278920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.278929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.278937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.278957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.288831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.288966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.288992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.289001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.289008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.289028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.298826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.298922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.298948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.298957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.298964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.298984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.308965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.309068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.309094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.309103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.309110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.309131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.318790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.319043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.319070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.319078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.319085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.319103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.328862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.328947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.328964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.328973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.328979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.328995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.338895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.338987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.339004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.339011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.339018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.339033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.348919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.505 [2024-07-24 20:08:58.349011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.505 [2024-07-24 20:08:58.349028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.505 [2024-07-24 20:08:58.349035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.505 [2024-07-24 20:08:58.349042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.505 [2024-07-24 20:08:58.349057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.505 qpair failed and we were unable to recover it. 
00:29:10.505 [2024-07-24 20:08:58.358991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.359085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.359103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.359110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.359116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.359135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.368965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.369048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.369064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.369071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.369078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.369093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.378941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.379028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.379045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.379052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.379059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.379073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.389099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.389192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.389213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.389221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.389227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.389243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.399116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.399215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.399232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.399239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.399245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.399261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.408965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.409049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.409071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.409078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.409085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.409099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.419131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.419219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.419236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.419245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.419252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.419268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.429210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.429294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.429311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.429319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.429325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.429341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.439096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.439189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.439209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.439217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.439223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.439238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.506 [2024-07-24 20:08:58.449231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.506 [2024-07-24 20:08:58.449316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.506 [2024-07-24 20:08:58.449332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.506 [2024-07-24 20:08:58.449339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.506 [2024-07-24 20:08:58.449346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.506 [2024-07-24 20:08:58.449365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.506 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.459214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.459299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.459316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.459324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.459331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.769 [2024-07-24 20:08:58.459347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.769 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.469352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.469456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.469473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.469480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.469487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.769 [2024-07-24 20:08:58.469504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.769 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.479343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.479439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.479456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.479464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.479471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.769 [2024-07-24 20:08:58.479486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.769 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.489329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.489412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.489428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.489437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.489443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.769 [2024-07-24 20:08:58.489458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.769 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.499367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.499455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.499472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.499480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.499486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.769 [2024-07-24 20:08:58.499501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.769 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.509312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.509409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.509426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.509433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.509440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.769 [2024-07-24 20:08:58.509455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.769 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.519443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.519542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.519559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.519566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.519573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.769 [2024-07-24 20:08:58.519589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.769 qpair failed and we were unable to recover it. 
00:29:10.769 [2024-07-24 20:08:58.529391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.769 [2024-07-24 20:08:58.529476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.769 [2024-07-24 20:08:58.529493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.769 [2024-07-24 20:08:58.529501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.769 [2024-07-24 20:08:58.529508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.529523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.539430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.539521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.539538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.539546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.539556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.539572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.549523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.549616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.549633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.549640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.549646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.549661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.559501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.559602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.559619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.559627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.559633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.559648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.569475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.569560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.569577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.569584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.569591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.569606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.579561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.579651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.579669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.579676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.579682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.579699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.589643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.589736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.589754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.589763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.589769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.589785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.599672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.599802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.599820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.599828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.599835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.599853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.609644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.609731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.609748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.609755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.609762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.609777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.619542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.619631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.619648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.619655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.619661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.619677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.629737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.629828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.629846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.629857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.629863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.629878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.639749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.639867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.639893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.639902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.639909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.639929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.649813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.649946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.649972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.649981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.649988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.650008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.659761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.659851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.659869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.659877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.659885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.659901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.669810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.669906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.669923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.669930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.669936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.669953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.679839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.679941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.679967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.679976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.770 [2024-07-24 20:08:58.679983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.770 [2024-07-24 20:08:58.680003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.770 qpair failed and we were unable to recover it. 
00:29:10.770 [2024-07-24 20:08:58.689778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.770 [2024-07-24 20:08:58.689868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.770 [2024-07-24 20:08:58.689888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.770 [2024-07-24 20:08:58.689896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.771 [2024-07-24 20:08:58.689902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.771 [2024-07-24 20:08:58.689923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.771 qpair failed and we were unable to recover it. 
00:29:10.771 [2024-07-24 20:08:58.699899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.771 [2024-07-24 20:08:58.699989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.771 [2024-07-24 20:08:58.700007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.771 [2024-07-24 20:08:58.700014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.771 [2024-07-24 20:08:58.700021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:10.771 [2024-07-24 20:08:58.700036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.771 qpair failed and we were unable to recover it. 
00:29:10.771 [2024-07-24 20:08:58.709951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.771 [2024-07-24 20:08:58.710083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.771 [2024-07-24 20:08:58.710101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.771 [2024-07-24 20:08:58.710109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.771 [2024-07-24 20:08:58.710116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.771 [2024-07-24 20:08:58.710131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.771 qpair failed and we were unable to recover it.
00:29:10.771 [2024-07-24 20:08:58.719839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.771 [2024-07-24 20:08:58.719937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.771 [2024-07-24 20:08:58.719960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.771 [2024-07-24 20:08:58.719968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.771 [2024-07-24 20:08:58.719975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:10.771 [2024-07-24 20:08:58.719991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:10.771 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.729993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.730080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.730097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.730105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.730112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.730128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.739990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.740082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.740098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.740106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.740113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.740128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.750075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.750171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.750187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.750196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.750209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.750225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.760093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.760220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.760240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.760248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.760255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.760275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.770073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.770161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.770178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.770186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.770192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.770212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.780087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.780178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.780195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.780207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.780214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.780230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.790175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.790274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.790291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.790299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.790305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.790321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.800145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.800242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.800260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.800268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.800274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.800290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.810189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.810280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.810301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.810309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.810315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.810330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.820217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.820307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.820324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.820332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.820338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.820353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.830301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.830396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.034 [2024-07-24 20:08:58.830412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.034 [2024-07-24 20:08:58.830421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.034 [2024-07-24 20:08:58.830428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.034 [2024-07-24 20:08:58.830443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.034 qpair failed and we were unable to recover it.
00:29:11.034 [2024-07-24 20:08:58.840390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.034 [2024-07-24 20:08:58.840481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.840498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.840505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.840512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.840527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.850305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.850388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.850404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.850412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.850418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.850438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.860292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.860381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.860397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.860404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.860411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.860426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.870265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.870373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.870390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.870398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.870404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.870419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.880374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.880465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.880482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.880489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.880496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.880511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.890478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.890564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.890580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.890588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.890595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.890611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.900303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.900391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.900412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.900419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.900426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.900441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.910518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.910612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.910629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.910636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.910642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.910658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.920496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.920588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.920605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.920613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.920620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.920635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.930496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.930586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.930602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.930610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.930616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.930631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.940604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.940690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.940707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.940714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.940728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.940743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.950486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.950578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.035 [2024-07-24 20:08:58.950594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.035 [2024-07-24 20:08:58.950602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.035 [2024-07-24 20:08:58.950608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.035 [2024-07-24 20:08:58.950623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.035 qpair failed and we were unable to recover it.
00:29:11.035 [2024-07-24 20:08:58.960585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.035 [2024-07-24 20:08:58.960672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.036 [2024-07-24 20:08:58.960688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.036 [2024-07-24 20:08:58.960697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.036 [2024-07-24 20:08:58.960703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.036 [2024-07-24 20:08:58.960718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.036 qpair failed and we were unable to recover it.
00:29:11.036 [2024-07-24 20:08:58.970602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.036 [2024-07-24 20:08:58.970685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.036 [2024-07-24 20:08:58.970701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.036 [2024-07-24 20:08:58.970709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.036 [2024-07-24 20:08:58.970715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.036 [2024-07-24 20:08:58.970731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.036 qpair failed and we were unable to recover it.
00:29:11.036 [2024-07-24 20:08:58.980663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.036 [2024-07-24 20:08:58.980750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.036 [2024-07-24 20:08:58.980768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.036 [2024-07-24 20:08:58.980775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.036 [2024-07-24 20:08:58.980782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.036 [2024-07-24 20:08:58.980796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.036 qpair failed and we were unable to recover it.
00:29:11.298 [2024-07-24 20:08:58.990743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.298 [2024-07-24 20:08:58.990838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.298 [2024-07-24 20:08:58.990855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.298 [2024-07-24 20:08:58.990862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.298 [2024-07-24 20:08:58.990868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.298 [2024-07-24 20:08:58.990883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.298 qpair failed and we were unable to recover it.
00:29:11.298 [2024-07-24 20:08:59.000698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.298 [2024-07-24 20:08:59.000788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.298 [2024-07-24 20:08:59.000806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.298 [2024-07-24 20:08:59.000813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.298 [2024-07-24 20:08:59.000820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.298 [2024-07-24 20:08:59.000835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.298 qpair failed and we were unable to recover it.
00:29:11.298 [2024-07-24 20:08:59.010713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.298 [2024-07-24 20:08:59.010797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.298 [2024-07-24 20:08:59.010813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.298 [2024-07-24 20:08:59.010821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.298 [2024-07-24 20:08:59.010828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.298 [2024-07-24 20:08:59.010843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.298 qpair failed and we were unable to recover it.
00:29:11.298 [2024-07-24 20:08:59.020759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.298 [2024-07-24 20:08:59.020850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.298 [2024-07-24 20:08:59.020876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.298 [2024-07-24 20:08:59.020885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.298 [2024-07-24 20:08:59.020892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.298 [2024-07-24 20:08:59.020912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.298 qpair failed and we were unable to recover it.
00:29:11.298 [2024-07-24 20:08:59.030843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.298 [2024-07-24 20:08:59.030945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.298 [2024-07-24 20:08:59.030971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.298 [2024-07-24 20:08:59.030985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.298 [2024-07-24 20:08:59.030992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.298 [2024-07-24 20:08:59.031013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.298 qpair failed and we were unable to recover it. 
00:29:11.298 [2024-07-24 20:08:59.040860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.298 [2024-07-24 20:08:59.040955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.298 [2024-07-24 20:08:59.040982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.298 [2024-07-24 20:08:59.040991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.298 [2024-07-24 20:08:59.040999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.298 [2024-07-24 20:08:59.041019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.298 qpair failed and we were unable to recover it. 
00:29:11.298 [2024-07-24 20:08:59.050826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.298 [2024-07-24 20:08:59.050917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.298 [2024-07-24 20:08:59.050944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.298 [2024-07-24 20:08:59.050953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.050960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.050980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.060923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.061022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.061049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.061058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.061065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.061085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.070952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.071047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.071066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.071073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.071080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.071096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.080870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.080960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.080977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.080984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.080991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.081007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.090935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.091018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.091035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.091042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.091049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.091064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.100860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.100942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.100961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.100969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.100975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.100991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.111016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.111124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.111142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.111149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.111155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.111171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.120920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.121009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.121026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.121037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.121044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.121059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.131049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.131133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.131149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.131158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.131164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.131179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.141061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.141143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.141159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.141167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.141173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.141188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.151126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.151213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.151230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.151237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.151245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.151260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.161125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.161219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.161236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.161243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.299 [2024-07-24 20:08:59.161250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.299 [2024-07-24 20:08:59.161265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.299 qpair failed and we were unable to recover it. 
00:29:11.299 [2024-07-24 20:08:59.171151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.299 [2024-07-24 20:08:59.171237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.299 [2024-07-24 20:08:59.171254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.299 [2024-07-24 20:08:59.171262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.171269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.171284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.300 [2024-07-24 20:08:59.181169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.300 [2024-07-24 20:08:59.181255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.300 [2024-07-24 20:08:59.181271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.300 [2024-07-24 20:08:59.181278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.181285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.181301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.300 [2024-07-24 20:08:59.191234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.300 [2024-07-24 20:08:59.191321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.300 [2024-07-24 20:08:59.191337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.300 [2024-07-24 20:08:59.191344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.191351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.191366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.300 [2024-07-24 20:08:59.201391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.300 [2024-07-24 20:08:59.201586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.300 [2024-07-24 20:08:59.201602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.300 [2024-07-24 20:08:59.201610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.201616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.201631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.300 [2024-07-24 20:08:59.211299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.300 [2024-07-24 20:08:59.211388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.300 [2024-07-24 20:08:59.211408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.300 [2024-07-24 20:08:59.211416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.211423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.211438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.300 [2024-07-24 20:08:59.221305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.300 [2024-07-24 20:08:59.221387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.300 [2024-07-24 20:08:59.221404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.300 [2024-07-24 20:08:59.221412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.221418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.221434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.300 [2024-07-24 20:08:59.231313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.300 [2024-07-24 20:08:59.231402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.300 [2024-07-24 20:08:59.231419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.300 [2024-07-24 20:08:59.231427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.231433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.231448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.300 [2024-07-24 20:08:59.241373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.300 [2024-07-24 20:08:59.241457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.300 [2024-07-24 20:08:59.241473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.300 [2024-07-24 20:08:59.241481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.300 [2024-07-24 20:08:59.241488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.300 [2024-07-24 20:08:59.241503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.300 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.251400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.251523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.251539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.251547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.251553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.251572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.261440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.261522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.261538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.261546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.261552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.261567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.271425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.271511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.271528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.271536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.271543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.271558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.281479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.281579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.281597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.281604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.281612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.281627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.291477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.291564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.291581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.291588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.291594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.291609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.301506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.301586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.301607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.301615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.301621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.301636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.311514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.311602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.311619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.311626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.311633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.311648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.563 qpair failed and we were unable to recover it. 
00:29:11.563 [2024-07-24 20:08:59.321580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.563 [2024-07-24 20:08:59.321671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.563 [2024-07-24 20:08:59.321688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.563 [2024-07-24 20:08:59.321695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.563 [2024-07-24 20:08:59.321701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.563 [2024-07-24 20:08:59.321716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.564 qpair failed and we were unable to recover it. 
00:29:11.564 [2024-07-24 20:08:59.331487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.331570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.331587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.331595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.331601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.331617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.341637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.341738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.341755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.341763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.341773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.341787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.351678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.351773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.351790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.351798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.351804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.351819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.361680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.361771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.361789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.361796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.361803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.361818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.371682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.371763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.371780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.371788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.371794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.371809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.381782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.381867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.381883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.381891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.381897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.381913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.391740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.391836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.391862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.391871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.391878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.391898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.401772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.401868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.401893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.401903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.401910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.401932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.411818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.411908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.411935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.411944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.411950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.411971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.421830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.421921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.421947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.421956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.421963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.564 [2024-07-24 20:08:59.421983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.564 qpair failed and we were unable to recover it.
00:29:11.564 [2024-07-24 20:08:59.431872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.564 [2024-07-24 20:08:59.431961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.564 [2024-07-24 20:08:59.431987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.564 [2024-07-24 20:08:59.432001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.564 [2024-07-24 20:08:59.432008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.432029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.441903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.442000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.442019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.442026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.442033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.442049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.451924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.452038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.452064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.452073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.452080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.452100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.461907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.462000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.462018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.462025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.462032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.462048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.471996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.472083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.472100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.472108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.472115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.472130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.481915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.482002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.482020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.482027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.482034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.482050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.492021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.492109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.492127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.492135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.492141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.492156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.502099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.502211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.502229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.502236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.502243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.502258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.565 [2024-07-24 20:08:59.512059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.565 [2024-07-24 20:08:59.512146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.565 [2024-07-24 20:08:59.512164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.565 [2024-07-24 20:08:59.512171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.565 [2024-07-24 20:08:59.512178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.565 [2024-07-24 20:08:59.512192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.565 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.522115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.522217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.522235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.522247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.522255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.522275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.532145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.532274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.532291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.532299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.532305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.532321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.542170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.542266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.542284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.542291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.542298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.542313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.552165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.552253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.552271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.552278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.552286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.552301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.562228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.562312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.562329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.562337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.562343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.562359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.572255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.572345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.572362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.572370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.572376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.572391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.582267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.582357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.582375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.582382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.582388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.582404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.592305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.592428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.592445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.592452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.592459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.592473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.602278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.827 [2024-07-24 20:08:59.602369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.827 [2024-07-24 20:08:59.602386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.827 [2024-07-24 20:08:59.602393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.827 [2024-07-24 20:08:59.602399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.827 [2024-07-24 20:08:59.602415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.827 qpair failed and we were unable to recover it.
00:29:11.827 [2024-07-24 20:08:59.612348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.612431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.612451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.612459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.612465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.612480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.622394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.622477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.622494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.622502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.622508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.622523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.632429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.632514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.632531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.632538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.632545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.632560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.642306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.642393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.642409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.642418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.642424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.642439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.652464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.652561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.652577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.652585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.652591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.652610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.662369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.662441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.662458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.662466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.662473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.662488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.672521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.672606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.672623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.672630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.672636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.672651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.682533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.828 [2024-07-24 20:08:59.682618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.828 [2024-07-24 20:08:59.682634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.828 [2024-07-24 20:08:59.682641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.828 [2024-07-24 20:08:59.682648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:11.828 [2024-07-24 20:08:59.682662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:11.828 qpair failed and we were unable to recover it.
00:29:11.828 [2024-07-24 20:08:59.692536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.828 [2024-07-24 20:08:59.692622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.828 [2024-07-24 20:08:59.692639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.828 [2024-07-24 20:08:59.692646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.828 [2024-07-24 20:08:59.692653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.828 [2024-07-24 20:08:59.692667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.828 qpair failed and we were unable to recover it. 
00:29:11.828 [2024-07-24 20:08:59.702604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.828 [2024-07-24 20:08:59.702686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.828 [2024-07-24 20:08:59.702710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.828 [2024-07-24 20:08:59.702717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.828 [2024-07-24 20:08:59.702724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.828 [2024-07-24 20:08:59.702738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.828 qpair failed and we were unable to recover it. 
00:29:11.828 [2024-07-24 20:08:59.712589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.828 [2024-07-24 20:08:59.712673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.828 [2024-07-24 20:08:59.712690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.828 [2024-07-24 20:08:59.712697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.828 [2024-07-24 20:08:59.712703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.828 [2024-07-24 20:08:59.712719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.828 qpair failed and we were unable to recover it. 
00:29:11.828 [2024-07-24 20:08:59.722666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.828 [2024-07-24 20:08:59.722752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.828 [2024-07-24 20:08:59.722768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.828 [2024-07-24 20:08:59.722776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.828 [2024-07-24 20:08:59.722782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.828 [2024-07-24 20:08:59.722797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.828 qpair failed and we were unable to recover it. 
00:29:11.828 [2024-07-24 20:08:59.732550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.829 [2024-07-24 20:08:59.732657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.829 [2024-07-24 20:08:59.732674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.829 [2024-07-24 20:08:59.732682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.829 [2024-07-24 20:08:59.732688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.829 [2024-07-24 20:08:59.732704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.829 qpair failed and we were unable to recover it. 
00:29:11.829 [2024-07-24 20:08:59.742617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.829 [2024-07-24 20:08:59.742704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.829 [2024-07-24 20:08:59.742720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.829 [2024-07-24 20:08:59.742728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.829 [2024-07-24 20:08:59.742738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.829 [2024-07-24 20:08:59.742753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.829 qpair failed and we were unable to recover it. 
00:29:11.829 [2024-07-24 20:08:59.752672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.829 [2024-07-24 20:08:59.752758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.829 [2024-07-24 20:08:59.752774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.829 [2024-07-24 20:08:59.752781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.829 [2024-07-24 20:08:59.752788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.829 [2024-07-24 20:08:59.752803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.829 qpair failed and we were unable to recover it. 
00:29:11.829 [2024-07-24 20:08:59.762733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.829 [2024-07-24 20:08:59.762832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.829 [2024-07-24 20:08:59.762858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.829 [2024-07-24 20:08:59.762867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.829 [2024-07-24 20:08:59.762874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.829 [2024-07-24 20:08:59.762894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.829 qpair failed and we were unable to recover it. 
00:29:11.829 [2024-07-24 20:08:59.772754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.829 [2024-07-24 20:08:59.772845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.829 [2024-07-24 20:08:59.772871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.829 [2024-07-24 20:08:59.772880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.829 [2024-07-24 20:08:59.772888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:11.829 [2024-07-24 20:08:59.772908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.829 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.782817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.782912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.782938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.782947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.782955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.782976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.792786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.792887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.792906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.792914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.792921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.792939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.802858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.802953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.802971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.802978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.802985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.803001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.812884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.812965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.812981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.812989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.812995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.813010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.822924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.823009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.823026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.823034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.823040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.823056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.832942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.833032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.833049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.833056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.833067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.833082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.842961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.843054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.843071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.843078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.843085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.843099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.853008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.853091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.853107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.853114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.853121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.853135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.863037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.863116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.863132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.863140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.863146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.863162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.873061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.873155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.873172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.873179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.873186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.873205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.883083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.883173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.883189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.883197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.092 [2024-07-24 20:08:59.883207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.092 [2024-07-24 20:08:59.883223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.092 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-07-24 20:08:59.893111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.092 [2024-07-24 20:08:59.893197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.092 [2024-07-24 20:08:59.893217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.092 [2024-07-24 20:08:59.893224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.893231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.893246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.903141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.903233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.903249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.903256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.903263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.903278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.913175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.913267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.913285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.913292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.913299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.913315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.923185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.923279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.923297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.923308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.923314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.923329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.933334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.933419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.933436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.933443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.933449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.933465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.943253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.943345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.943361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.943369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.943376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.943391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.953291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.953374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.953391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.953398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.953405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.953421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.963278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.963369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.963386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.963393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.963399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.963415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.973342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.973425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.973444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.973451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.973458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.973474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.983338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.983419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.983436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.983444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.983450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.983466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:08:59.993344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:08:59.993446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:08:59.993464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:08:59.993471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:08:59.993478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:08:59.993493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:09:00.003300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:09:00.003394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:09:00.003412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:09:00.003420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:09:00.003426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:09:00.003442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:09:00.013528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:09:00.013615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:09:00.013636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:09:00.013643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:09:00.013650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.093 [2024-07-24 20:09:00.013665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.093 qpair failed and we were unable to recover it. 
00:29:12.093 [2024-07-24 20:09:00.023506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.093 [2024-07-24 20:09:00.023599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.093 [2024-07-24 20:09:00.023621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.093 [2024-07-24 20:09:00.023629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.093 [2024-07-24 20:09:00.023636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.094 [2024-07-24 20:09:00.023655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-07-24 20:09:00.033525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.094 [2024-07-24 20:09:00.033616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.094 [2024-07-24 20:09:00.033635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.094 [2024-07-24 20:09:00.033643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.094 [2024-07-24 20:09:00.033649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.094 [2024-07-24 20:09:00.033666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-07-24 20:09:00.043515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.094 [2024-07-24 20:09:00.043652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.094 [2024-07-24 20:09:00.043670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.094 [2024-07-24 20:09:00.043678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.094 [2024-07-24 20:09:00.043685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.094 [2024-07-24 20:09:00.043701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.356 [2024-07-24 20:09:00.053485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.356 [2024-07-24 20:09:00.053577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.356 [2024-07-24 20:09:00.053594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.356 [2024-07-24 20:09:00.053603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.356 [2024-07-24 20:09:00.053610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.356 [2024-07-24 20:09:00.053631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.356 qpair failed and we were unable to recover it. 
00:29:12.356 [2024-07-24 20:09:00.063559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.063644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.063661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.063669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.063676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.063691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.073661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.073798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.073828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.073840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.073850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.073874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.083627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.083718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.083737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.083745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.083752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.083769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.093643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.093749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.093766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.093775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.093782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.093798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.103687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.103771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.103792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.103800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.103806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.103821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.113602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.113795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.113812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.113819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.113826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.113841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.123763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.123853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.123869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.123877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.123884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.123900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.133791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.133890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.133916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.133926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.133933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.133953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.143827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.143920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.143946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.143955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.143962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.143987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.153806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.153897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.153923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.153932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.153939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.153959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.163876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.164076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.164094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.164101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.164108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.164125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.173889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.173977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.173994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.174002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.174008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.174024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.183903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.183986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.357 [2024-07-24 20:09:00.184002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.357 [2024-07-24 20:09:00.184010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.357 [2024-07-24 20:09:00.184017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.357 [2024-07-24 20:09:00.184032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.357 qpair failed and we were unable to recover it. 
00:29:12.357 [2024-07-24 20:09:00.193812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.357 [2024-07-24 20:09:00.193907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.193924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.193931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.193938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.193953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.203965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.204051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.204067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.204075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.204082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.204097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.214000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.214088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.214105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.214112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.214119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.214134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.223987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.224072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.224089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.224096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.224102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.224117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.234057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.234143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.234160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.234168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.234179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.234194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.244092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.244184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.244203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.244211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.244218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.244233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.254165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.254252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.254269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.254278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.254286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.254301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.264005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.264093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.264110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.264118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.264124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.264140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.274154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.274247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.274264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.274272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.274279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.274294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.284195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.358 [2024-07-24 20:09:00.284305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.358 [2024-07-24 20:09:00.284322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.358 [2024-07-24 20:09:00.284330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.358 [2024-07-24 20:09:00.284337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.358 [2024-07-24 20:09:00.284352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.358 qpair failed and we were unable to recover it. 
00:29:12.358 [2024-07-24 20:09:00.294212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.358 [2024-07-24 20:09:00.294291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.358 [2024-07-24 20:09:00.294307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.358 [2024-07-24 20:09:00.294315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.358 [2024-07-24 20:09:00.294323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.358 [2024-07-24 20:09:00.294339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.358 qpair failed and we were unable to recover it.
00:29:12.358 [2024-07-24 20:09:00.304225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.358 [2024-07-24 20:09:00.304319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.358 [2024-07-24 20:09:00.304336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.358 [2024-07-24 20:09:00.304344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.358 [2024-07-24 20:09:00.304351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.358 [2024-07-24 20:09:00.304366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.358 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.314271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.314358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.314374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.622 [2024-07-24 20:09:00.314382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.622 [2024-07-24 20:09:00.314389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.622 [2024-07-24 20:09:00.314404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.622 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.324254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.324345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.324361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.622 [2024-07-24 20:09:00.324373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.622 [2024-07-24 20:09:00.324379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.622 [2024-07-24 20:09:00.324395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.622 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.334277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.334367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.334384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.622 [2024-07-24 20:09:00.334391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.622 [2024-07-24 20:09:00.334397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.622 [2024-07-24 20:09:00.334413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.622 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.344220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.344324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.344342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.622 [2024-07-24 20:09:00.344349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.622 [2024-07-24 20:09:00.344356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.622 [2024-07-24 20:09:00.344372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.622 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.354327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.354410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.354426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.622 [2024-07-24 20:09:00.354435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.622 [2024-07-24 20:09:00.354441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.622 [2024-07-24 20:09:00.354457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.622 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.364410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.364503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.364519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.622 [2024-07-24 20:09:00.364527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.622 [2024-07-24 20:09:00.364534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.622 [2024-07-24 20:09:00.364550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.622 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.374429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.374513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.374530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.622 [2024-07-24 20:09:00.374538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.622 [2024-07-24 20:09:00.374545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.622 [2024-07-24 20:09:00.374560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.622 qpair failed and we were unable to recover it.
00:29:12.622 [2024-07-24 20:09:00.384446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.622 [2024-07-24 20:09:00.384541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.622 [2024-07-24 20:09:00.384558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.384566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.384573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.384588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.394486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.394573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.394589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.394598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.394604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.394620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.404513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.404612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.404629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.404637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.404643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.404658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.414520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.414604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.414624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.414632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.414638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.414653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.424499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.424624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.424641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.424648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.424655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.424670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.434463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.434545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.434561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.434568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.434575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.434589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.444624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.444708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.444725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.444733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.444740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.444755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.454607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.454691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.454707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.454715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.454722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.454744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.464668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.464761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.464778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.464785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.464792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.464807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.474702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.474792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.474809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.474816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.474823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.474838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.484717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.484808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.484824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.484832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.484838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.484853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.494760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.494854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.494871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.494879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.494885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.494900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.504780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.623 [2024-07-24 20:09:00.504876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.623 [2024-07-24 20:09:00.504896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.623 [2024-07-24 20:09:00.504904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.623 [2024-07-24 20:09:00.504910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.623 [2024-07-24 20:09:00.504925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.623 qpair failed and we were unable to recover it.
00:29:12.623 [2024-07-24 20:09:00.514829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.624 [2024-07-24 20:09:00.514928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.624 [2024-07-24 20:09:00.514945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.624 [2024-07-24 20:09:00.514952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.624 [2024-07-24 20:09:00.514958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.624 [2024-07-24 20:09:00.514974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.624 qpair failed and we were unable to recover it.
00:29:12.624 [2024-07-24 20:09:00.524816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.624 [2024-07-24 20:09:00.524916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.624 [2024-07-24 20:09:00.524943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.624 [2024-07-24 20:09:00.524952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.624 [2024-07-24 20:09:00.524960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.624 [2024-07-24 20:09:00.524980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.624 qpair failed and we were unable to recover it.
00:29:12.624 [2024-07-24 20:09:00.534824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.624 [2024-07-24 20:09:00.534910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.624 [2024-07-24 20:09:00.534929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.624 [2024-07-24 20:09:00.534937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.624 [2024-07-24 20:09:00.534944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.624 [2024-07-24 20:09:00.534961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.624 qpair failed and we were unable to recover it.
00:29:12.624 [2024-07-24 20:09:00.544886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.624 [2024-07-24 20:09:00.544969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.624 [2024-07-24 20:09:00.544986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.624 [2024-07-24 20:09:00.544994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.624 [2024-07-24 20:09:00.545001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.624 [2024-07-24 20:09:00.545021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.624 qpair failed and we were unable to recover it.
00:29:12.624 [2024-07-24 20:09:00.554895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.624 [2024-07-24 20:09:00.554980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.624 [2024-07-24 20:09:00.554996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.624 [2024-07-24 20:09:00.555004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.624 [2024-07-24 20:09:00.555011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.624 [2024-07-24 20:09:00.555027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.624 qpair failed and we were unable to recover it.
00:29:12.624 [2024-07-24 20:09:00.564967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.624 [2024-07-24 20:09:00.565163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.624 [2024-07-24 20:09:00.565190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.624 [2024-07-24 20:09:00.565198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.624 [2024-07-24 20:09:00.565210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.624 [2024-07-24 20:09:00.565230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.624 qpair failed and we were unable to recover it.
00:29:12.886 [2024-07-24 20:09:00.574964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.886 [2024-07-24 20:09:00.575049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.886 [2024-07-24 20:09:00.575068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.886 [2024-07-24 20:09:00.575076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.886 [2024-07-24 20:09:00.575083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.886 [2024-07-24 20:09:00.575101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.886 qpair failed and we were unable to recover it.
00:29:12.886 [2024-07-24 20:09:00.584964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.886 [2024-07-24 20:09:00.585052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.886 [2024-07-24 20:09:00.585069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.886 [2024-07-24 20:09:00.585076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.886 [2024-07-24 20:09:00.585083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.886 [2024-07-24 20:09:00.585099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.886 qpair failed and we were unable to recover it.
00:29:12.886 [2024-07-24 20:09:00.595026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.886 [2024-07-24 20:09:00.595113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.886 [2024-07-24 20:09:00.595135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.886 [2024-07-24 20:09:00.595142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.886 [2024-07-24 20:09:00.595149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.886 [2024-07-24 20:09:00.595164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.886 qpair failed and we were unable to recover it.
00:29:12.886 [2024-07-24 20:09:00.605009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.605088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.605105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.605112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.605119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.605134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.615053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.615140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.615157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.615164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.615170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.615186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.625101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.625204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.625221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.625229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.625236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.625252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.635130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.635315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.635334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.635341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.635352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.635368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.645160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.645250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.645267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.645275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.645281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.645297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.655049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.887 [2024-07-24 20:09:00.655140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.887 [2024-07-24 20:09:00.655158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.887 [2024-07-24 20:09:00.655166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.887 [2024-07-24 20:09:00.655172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90 00:29:12.887 [2024-07-24 20:09:00.655187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.887 qpair failed and we were unable to recover it. 
00:29:12.887 [2024-07-24 20:09:00.665212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.665296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.665312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.665320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.665327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.665342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.675244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.675328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.675345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.675352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.675359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.675374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.685303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.685421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.685438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.685446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.685453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.685468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.695292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.695378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.695395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.695403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.695410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.695426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.705321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.705406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.705423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.705431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.705438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.705454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.715351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.715441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.715459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.715467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.887 [2024-07-24 20:09:00.715473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.887 [2024-07-24 20:09:00.715488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.887 qpair failed and we were unable to recover it.
00:29:12.887 [2024-07-24 20:09:00.725378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.887 [2024-07-24 20:09:00.725505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.887 [2024-07-24 20:09:00.725522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.887 [2024-07-24 20:09:00.725533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.725539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.725554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.735385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.735493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.735510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.735517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.735524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.735539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.745320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.745406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.745423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.745430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.745436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.745452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.755461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.755544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.755560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.755568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.755574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.755589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.765460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.765548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.765564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.765572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.765578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.765595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.775500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.775585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.775602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.775609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.775615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.775630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.785548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.785628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.785644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.785652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.785659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.785674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.795563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.795648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.795666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.795673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.795680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.795695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.805577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.805664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.805680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.805688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.805695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.805711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.815501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.815585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.815603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.815614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.815621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.815638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.825585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.825673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.825690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.825698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.825704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.825720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:12.888 [2024-07-24 20:09:00.835801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.888 [2024-07-24 20:09:00.835889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.888 [2024-07-24 20:09:00.835906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.888 [2024-07-24 20:09:00.835913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.888 [2024-07-24 20:09:00.835919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:12.888 [2024-07-24 20:09:00.835935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.888 qpair failed and we were unable to recover it.
00:29:13.150 [2024-07-24 20:09:00.845665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.150 [2024-07-24 20:09:00.845765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.150 [2024-07-24 20:09:00.845791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.150 [2024-07-24 20:09:00.845800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.150 [2024-07-24 20:09:00.845808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:13.150 [2024-07-24 20:09:00.845828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.150 qpair failed and we were unable to recover it.
00:29:13.150 [2024-07-24 20:09:00.855700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.150 [2024-07-24 20:09:00.855790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.150 [2024-07-24 20:09:00.855816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.150 [2024-07-24 20:09:00.855825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.150 [2024-07-24 20:09:00.855833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:13.150 [2024-07-24 20:09:00.855853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.150 qpair failed and we were unable to recover it.
00:29:13.150 [2024-07-24 20:09:00.865767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.150 [2024-07-24 20:09:00.865852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.150 [2024-07-24 20:09:00.865871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.150 [2024-07-24 20:09:00.865879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.150 [2024-07-24 20:09:00.865885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:13.150 [2024-07-24 20:09:00.865903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.150 qpair failed and we were unable to recover it.
00:29:13.150 [2024-07-24 20:09:00.875661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.151 [2024-07-24 20:09:00.875757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.151 [2024-07-24 20:09:00.875783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.151 [2024-07-24 20:09:00.875793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.151 [2024-07-24 20:09:00.875800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:13.151 [2024-07-24 20:09:00.875820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.151 qpair failed and we were unable to recover it.
00:29:13.151 [2024-07-24 20:09:00.885770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.151 [2024-07-24 20:09:00.885868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.151 [2024-07-24 20:09:00.885894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.151 [2024-07-24 20:09:00.885904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.151 [2024-07-24 20:09:00.885910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:13.151 [2024-07-24 20:09:00.885931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.151 qpair failed and we were unable to recover it.
00:29:13.151 [2024-07-24 20:09:00.895796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.151 [2024-07-24 20:09:00.895890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.151 [2024-07-24 20:09:00.895917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.151 [2024-07-24 20:09:00.895926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.151 [2024-07-24 20:09:00.895933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3e4000b90
00:29:13.151 [2024-07-24 20:09:00.895953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:13.151 qpair failed and we were unable to recover it.
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 [2024-07-24 20:09:00.896888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.151 [2024-07-24 20:09:00.905812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.151 [2024-07-24 20:09:00.905893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.151 [2024-07-24 20:09:00.905914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.151 [2024-07-24 20:09:00.905920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.151 [2024-07-24 20:09:00.905925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3ec000b90
00:29:13.151 [2024-07-24 20:09:00.905940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.151 qpair failed and we were unable to recover it.
00:29:13.151 [2024-07-24 20:09:00.915771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.151 [2024-07-24 20:09:00.915844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.151 [2024-07-24 20:09:00.915863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.151 [2024-07-24 20:09:00.915870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.151 [2024-07-24 20:09:00.915876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3ec000b90
00:29:13.151 [2024-07-24 20:09:00.915891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:13.151 qpair failed and we were unable to recover it.
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Write completed with error (sct=0, sc=8)
00:29:13.151 starting I/O failed
00:29:13.151 Read completed with error (sct=0, sc=8)
00:29:13.152 starting I/O failed
00:29:13.152 Read completed with error (sct=0, sc=8)
00:29:13.152 starting I/O failed
00:29:13.152 [2024-07-24 20:09:00.916285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.152 [2024-07-24 20:09:00.926037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.152 [2024-07-24 20:09:00.926269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.152 [2024-07-24 20:09:00.926337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.152 [2024-07-24 20:09:00.926364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.152 [2024-07-24 20:09:00.926384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3f4000b90
00:29:13.152 [2024-07-24 20:09:00.926439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.152 qpair failed and we were unable to recover it.
00:29:13.152 [2024-07-24 20:09:00.936010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.152 [2024-07-24 20:09:00.936191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.152 [2024-07-24 20:09:00.936232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.152 [2024-07-24 20:09:00.936248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.152 [2024-07-24 20:09:00.936261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe3f4000b90
00:29:13.152 [2024-07-24 20:09:00.936294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.152 qpair failed and we were unable to recover it.
00:29:13.152 [2024-07-24 20:09:00.937009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe05f20 is same with the state(5) to be set
00:29:13.152 [2024-07-24 20:09:00.945861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.152 [2024-07-24 20:09:00.945952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.152 [2024-07-24 20:09:00.945978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.152 [2024-07-24 20:09:00.945992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.152 [2024-07-24 20:09:00.946000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf8220
00:29:13.152 [2024-07-24 20:09:00.946021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.152 qpair failed and we were unable to recover it.
00:29:13.152 [2024-07-24 20:09:00.955965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.152 [2024-07-24 20:09:00.956058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.152 [2024-07-24 20:09:00.956084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.152 [2024-07-24 20:09:00.956094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.152 [2024-07-24 20:09:00.956101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xdf8220
00:29:13.152 [2024-07-24 20:09:00.956121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.152 qpair failed and we were unable to recover it.
00:29:13.152 [2024-07-24 20:09:00.956520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe05f20 (9): Bad file descriptor
00:29:13.152 Initializing NVMe Controllers
00:29:13.152 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:13.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:13.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:13.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:13.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:13.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:13.152 Initialization complete. Launching workers.
00:29:13.152 Starting thread on core 1 00:29:13.152 Starting thread on core 2 00:29:13.152 Starting thread on core 3 00:29:13.152 Starting thread on core 0 00:29:13.152 20:09:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:13.152 00:29:13.152 real 0m11.433s 00:29:13.152 user 0m20.564s 00:29:13.152 sys 0m4.129s 00:29:13.152 20:09:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.152 20:09:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.152 ************************************ 00:29:13.152 END TEST nvmf_target_disconnect_tc2 00:29:13.152 ************************************ 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:13.152 rmmod nvme_tcp 00:29:13.152 rmmod nvme_fabrics 00:29:13.152 rmmod nvme_keyring 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3862490 ']' 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3862490 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3862490 ']' 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3862490 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.152 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3862490 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3862490' 00:29:13.413 killing process with pid 3862490 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3862490 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3862490 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.413 20:09:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.958 20:09:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:15.958 00:29:15.958 real 0m21.418s 00:29:15.958 user 0m48.500s 00:29:15.958 sys 0m9.963s 00:29:15.958 20:09:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.958 20:09:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:15.958 ************************************ 00:29:15.958 END TEST nvmf_target_disconnect 00:29:15.958 ************************************ 00:29:15.958 20:09:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:15.959 00:29:15.959 real 6m17.216s 00:29:15.959 user 11m5.302s 00:29:15.959 sys 2m6.397s 00:29:15.959 20:09:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.959 20:09:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.959 ************************************ 00:29:15.959 END TEST nvmf_host 00:29:15.959 ************************************ 00:29:15.959 00:29:15.959 real 22m42.728s 00:29:15.959 user 47m13.562s 00:29:15.959 sys 7m13.986s 00:29:15.959 20:09:03 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.959 20:09:03 nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:29:15.959 ************************************ 00:29:15.959 END TEST nvmf_tcp 00:29:15.959 ************************************ 00:29:15.959 20:09:03 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:29:15.959 20:09:03 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:15.959 20:09:03 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:15.959 20:09:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.959 20:09:03 -- common/autotest_common.sh@10 -- # set +x 00:29:15.959 ************************************ 00:29:15.959 START TEST spdkcli_nvmf_tcp 00:29:15.959 ************************************ 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:15.959 * Looking for test storage... 00:29:15.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.959 
20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3864477 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3864477 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3864477 ']' 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:15.959 20:09:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.959 [2024-07-24 20:09:03.705579] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:29:15.959 [2024-07-24 20:09:03.705639] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864477 ] 00:29:15.959 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.959 [2024-07-24 20:09:03.764106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.959 [2024-07-24 20:09:03.830320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.959 [2024-07-24 20:09:03.830449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.530 20:09:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:16.530 20:09:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:29:16.530 20:09:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:16.530 20:09:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:16.530 20:09:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.791 20:09:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:16.791 20:09:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:16.791 20:09:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:16.791 20:09:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:16.791 20:09:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:29:16.791 20:09:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:16.791 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:16.791 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:16.791 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:16.791 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:16.791 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:16.791 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:16.791 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:16.791 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:16.791 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:16.791 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:16.791 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:16.791 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:16.791 ' 00:29:19.333 [2024-07-24 20:09:06.831044] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.274 [2024-07-24 20:09:07.994807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:22.820 [2024-07-24 20:09:10.293787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:24.733 [2024-07-24 20:09:12.375905] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:29:26.176 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:26.176 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:26.176 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:26.176 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:26.176 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:26.176 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:26.176 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:26.176 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:26.176 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:26.176 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:26.176 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:26.177 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:26.177 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:26.177 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:26.177 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:26.177 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:26.177 20:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:26.177 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.177 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:26.177 20:09:14 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:26.177 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.177 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:26.177 20:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:26.177 20:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:26.748 20:09:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:26.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:26.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:26.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:26.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:26.748 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:26.748 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:26.748 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:26.748 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:26.748 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:26.748 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:26.748 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:26.748 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:26.748 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:26.748 ' 00:29:32.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:32.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:32.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:32.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:32.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:32.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:32.033 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:32.033 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:32.033 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:29:32.033 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:32.033 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:32.033 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:32.033 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:32.033 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3864477 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3864477 ']' 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3864477 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:32.033 20:09:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3864477 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3864477' 00:29:32.293 killing process with pid 3864477 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3864477 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3864477 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:32.293 
20:09:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3864477 ']' 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3864477 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3864477 ']' 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3864477 00:29:32.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3864477) - No such process 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3864477 is not found' 00:29:32.293 Process with pid 3864477 is not found 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:32.293 00:29:32.293 real 0m16.646s 00:29:32.293 user 0m35.865s 00:29:32.293 sys 0m0.819s 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:32.293 20:09:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.293 ************************************ 00:29:32.293 END TEST spdkcli_nvmf_tcp 00:29:32.293 ************************************ 00:29:32.293 20:09:20 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:32.293 20:09:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:32.293 20:09:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:32.293 20:09:20 -- common/autotest_common.sh@10 -- # set +x 00:29:32.293 ************************************ 00:29:32.293 START TEST 
nvmf_identify_passthru 00:29:32.293 ************************************ 00:29:32.293 20:09:20 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:32.554 * Looking for test storage... 00:29:32.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.554 20:09:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:32.554 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.555 20:09:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.555 20:09:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.555 20:09:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:32.555 20:09:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.555 20:09:20 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.555 20:09:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.555 20:09:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:29:32.555 20:09:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.555 20:09:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.555 20:09:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:32.555 20:09:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:32.555 20:09:20 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:32.555 20:09:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.695 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.695 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:29:40.695 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:40.695 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:40.696 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:40.696 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:40.696 20:09:27 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:40.696 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.696 20:09:27 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:40.696 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.696 20:09:27 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:40.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:29:40.696 00:29:40.696 --- 10.0.0.2 ping statistics --- 00:29:40.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.696 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:29:40.696 00:29:40.696 --- 10.0.0.1 ping statistics --- 00:29:40.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.696 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:40.696 20:09:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:40.696 20:09:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.696 20:09:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:40.696 20:09:27 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:40.696 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:40.697 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:29:40.697 20:09:27 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:29:40.697 20:09:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:29:40.697 20:09:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:29:40.697 20:09:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:40.697 20:09:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:40.697 20:09:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:40.697 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:40.697 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3872128 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:40.697 20:09:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3872128 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3872128 ']' 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:40.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.697 20:09:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:40.957 [2024-07-24 20:09:28.657425] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:29:40.957 [2024-07-24 20:09:28.657490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.957 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.957 [2024-07-24 20:09:28.725172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:40.957 [2024-07-24 20:09:28.797009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.957 [2024-07-24 20:09:28.797047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.957 [2024-07-24 20:09:28.797055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.957 [2024-07-24 20:09:28.797061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.957 [2024-07-24 20:09:28.797066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:40.957 [2024-07-24 20:09:28.797227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.957 [2024-07-24 20:09:28.797308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.957 [2024-07-24 20:09:28.797472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.957 [2024-07-24 20:09:28.797473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.527 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.527 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:29:41.527 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:41.527 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.527 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.527 INFO: Log level set to 20 00:29:41.527 INFO: Requests: 00:29:41.527 { 00:29:41.527 "jsonrpc": "2.0", 00:29:41.527 "method": "nvmf_set_config", 00:29:41.527 "id": 1, 00:29:41.527 "params": { 00:29:41.527 "admin_cmd_passthru": { 00:29:41.527 "identify_ctrlr": true 00:29:41.527 } 00:29:41.527 } 00:29:41.527 } 00:29:41.527 00:29:41.527 INFO: response: 00:29:41.527 { 00:29:41.527 "jsonrpc": "2.0", 00:29:41.527 "id": 1, 00:29:41.527 "result": true 00:29:41.527 } 00:29:41.527 00:29:41.527 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.527 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:41.527 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.528 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.528 INFO: Setting log level to 20 00:29:41.528 INFO: Setting log level to 20 00:29:41.528 INFO: Log level set to 20 00:29:41.528 INFO: Log level set to 20 00:29:41.528 
INFO: Requests: 00:29:41.528 { 00:29:41.528 "jsonrpc": "2.0", 00:29:41.528 "method": "framework_start_init", 00:29:41.528 "id": 1 00:29:41.528 } 00:29:41.528 00:29:41.528 INFO: Requests: 00:29:41.528 { 00:29:41.528 "jsonrpc": "2.0", 00:29:41.528 "method": "framework_start_init", 00:29:41.528 "id": 1 00:29:41.528 } 00:29:41.528 00:29:41.788 [2024-07-24 20:09:29.505628] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:41.788 INFO: response: 00:29:41.788 { 00:29:41.788 "jsonrpc": "2.0", 00:29:41.788 "id": 1, 00:29:41.788 "result": true 00:29:41.788 } 00:29:41.788 00:29:41.788 INFO: response: 00:29:41.788 { 00:29:41.788 "jsonrpc": "2.0", 00:29:41.788 "id": 1, 00:29:41.788 "result": true 00:29:41.788 } 00:29:41.788 00:29:41.788 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.788 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.788 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.788 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.788 INFO: Setting log level to 40 00:29:41.788 INFO: Setting log level to 40 00:29:41.788 INFO: Setting log level to 40 00:29:41.788 [2024-07-24 20:09:29.518958] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.788 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.788 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:41.788 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.788 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.788 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:29:41.788 20:09:29 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.788 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.048 Nvme0n1 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.048 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.048 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.048 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.048 [2024-07-24 20:09:29.901603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.048 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.048 20:09:29 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.048 [ 00:29:42.048 { 00:29:42.048 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:42.048 "subtype": "Discovery", 00:29:42.048 "listen_addresses": [], 00:29:42.048 "allow_any_host": true, 00:29:42.048 "hosts": [] 00:29:42.048 }, 00:29:42.048 { 00:29:42.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.048 "subtype": "NVMe", 00:29:42.048 "listen_addresses": [ 00:29:42.048 { 00:29:42.048 "trtype": "TCP", 00:29:42.048 "adrfam": "IPv4", 00:29:42.048 "traddr": "10.0.0.2", 00:29:42.048 "trsvcid": "4420" 00:29:42.048 } 00:29:42.048 ], 00:29:42.048 "allow_any_host": true, 00:29:42.048 "hosts": [], 00:29:42.048 "serial_number": "SPDK00000000000001", 00:29:42.048 "model_number": "SPDK bdev Controller", 00:29:42.048 "max_namespaces": 1, 00:29:42.048 "min_cntlid": 1, 00:29:42.048 "max_cntlid": 65519, 00:29:42.048 "namespaces": [ 00:29:42.048 { 00:29:42.048 "nsid": 1, 00:29:42.048 "bdev_name": "Nvme0n1", 00:29:42.048 "name": "Nvme0n1", 00:29:42.048 "nguid": "36344730526054870025384500000044", 00:29:42.048 "uuid": "36344730-5260-5487-0025-384500000044" 00:29:42.048 } 00:29:42.048 ] 00:29:42.048 } 00:29:42.048 ] 00:29:42.048 20:09:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.049 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:42.049 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:42.049 20:09:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:42.049 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.308 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:29:42.308 20:09:30 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:42.308 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:42.308 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:42.308 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.569 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:29:42.569 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:29:42.569 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:29:42.569 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.569 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:42.569 20:09:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:42.569 rmmod 
nvme_tcp 00:29:42.569 rmmod nvme_fabrics 00:29:42.569 rmmod nvme_keyring 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3872128 ']' 00:29:42.569 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3872128 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3872128 ']' 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3872128 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.569 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3872128 00:29:42.831 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:42.831 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:42.831 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3872128' 00:29:42.831 killing process with pid 3872128 00:29:42.831 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3872128 00:29:42.831 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3872128 00:29:43.092 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:43.092 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:43.092 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:43.092 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:29:43.092 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:43.092 20:09:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.092 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:43.092 20:09:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.005 20:09:32 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:45.005 00:29:45.005 real 0m12.654s 00:29:45.005 user 0m10.357s 00:29:45.005 sys 0m5.990s 00:29:45.005 20:09:32 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.005 20:09:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.005 ************************************ 00:29:45.005 END TEST nvmf_identify_passthru 00:29:45.005 ************************************ 00:29:45.005 20:09:32 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:45.005 20:09:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:45.005 20:09:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:45.005 20:09:32 -- common/autotest_common.sh@10 -- # set +x 00:29:45.267 ************************************ 00:29:45.267 START TEST nvmf_dif 00:29:45.267 ************************************ 00:29:45.267 20:09:32 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:45.267 * Looking for test storage... 
00:29:45.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:45.267 20:09:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.267 20:09:33 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.267 20:09:33 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.267 20:09:33 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.267 20:09:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.267 20:09:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.267 20:09:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.267 20:09:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:45.267 20:09:33 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:45.267 20:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:45.267 20:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:45.267 20:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:45.267 20:09:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:45.267 20:09:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.267 20:09:33 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:45.267 20:09:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:45.267 20:09:33 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:45.267 20:09:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:53.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:53.407 20:09:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:29:53.408 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:53.408 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:53.408 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.408 20:09:39 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.408 20:09:40 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:29:53.408 00:29:53.408 --- 10.0.0.2 ping statistics --- 00:29:53.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.408 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:29:53.408 00:29:53.408 --- 10.0.0.1 ping statistics --- 00:29:53.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.408 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:53.408 20:09:40 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:55.319 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:55.319 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:55.319 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:55.580 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:55.580 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:55.580 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:55.580 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:55.580 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.840 20:09:43 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:55.840 20:09:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:55.840 20:09:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3878132 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3878132 00:29:55.840 20:09:43 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3878132 ']' 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:55.840 20:09:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:55.840 [2024-07-24 20:09:43.678409] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:29:55.840 [2024-07-24 20:09:43.678456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.840 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.840 [2024-07-24 20:09:43.742010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.100 [2024-07-24 20:09:43.805796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.100 [2024-07-24 20:09:43.805832] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.100 [2024-07-24 20:09:43.805839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.100 [2024-07-24 20:09:43.805846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.100 [2024-07-24 20:09:43.805851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:56.100 [2024-07-24 20:09:43.805869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:29:56.672 20:09:44 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:56.672 20:09:44 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.672 20:09:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:56.672 20:09:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:56.672 [2024-07-24 20:09:44.456407] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.672 20:09:44 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:56.672 20:09:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:56.672 ************************************ 00:29:56.672 START TEST fio_dif_1_default 00:29:56.672 ************************************ 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:56.672 bdev_null0 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:56.672 [2024-07-24 20:09:44.540729] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:56.672 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:56.672 { 00:29:56.672 "params": { 00:29:56.672 "name": "Nvme$subsystem", 00:29:56.672 "trtype": "$TEST_TRANSPORT", 00:29:56.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.672 "adrfam": "ipv4", 00:29:56.672 "trsvcid": "$NVMF_PORT", 00:29:56.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.672 "hdgst": ${hdgst:-false}, 00:29:56.672 "ddgst": ${ddgst:-false} 00:29:56.672 }, 00:29:56.672 "method": "bdev_nvme_attach_controller" 00:29:56.672 } 00:29:56.672 EOF 00:29:56.672 )") 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:56.673 "params": { 00:29:56.673 "name": "Nvme0", 00:29:56.673 "trtype": "tcp", 00:29:56.673 "traddr": "10.0.0.2", 00:29:56.673 "adrfam": "ipv4", 00:29:56.673 "trsvcid": "4420", 00:29:56.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:56.673 "hdgst": false, 00:29:56.673 "ddgst": false 00:29:56.673 }, 00:29:56.673 "method": "bdev_nvme_attach_controller" 00:29:56.673 }' 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:56.673 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:56.968 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:56.968 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:56.968 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:56.968 20:09:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:57.235 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:57.235 fio-3.35 
00:29:57.235 Starting 1 thread 00:29:57.235 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.523 00:30:09.523 filename0: (groupid=0, jobs=1): err= 0: pid=3878661: Wed Jul 24 20:09:55 2024 00:30:09.523 read: IOPS=181, BW=727KiB/s (745kB/s)(7296KiB/10030msec) 00:30:09.523 slat (nsec): min=5375, max=55043, avg=6619.65, stdev=2110.85 00:30:09.523 clat (usec): min=1101, max=44540, avg=21976.63, stdev=20371.46 00:30:09.523 lat (usec): min=1107, max=44575, avg=21983.25, stdev=20371.54 00:30:09.523 clat percentiles (usec): 00:30:09.523 | 1.00th=[ 1418], 5.00th=[ 1483], 10.00th=[ 1500], 20.00th=[ 1516], 00:30:09.523 | 30.00th=[ 1532], 40.00th=[ 1565], 50.00th=[41681], 60.00th=[42206], 00:30:09.523 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:30:09.523 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:30:09.523 | 99.99th=[44303] 00:30:09.523 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=728.00, stdev=34.24, samples=20 00:30:09.523 iops : min= 168, max= 192, avg=182.00, stdev= 8.56, samples=20 00:30:09.523 lat (msec) : 2=49.78%, 50=50.22% 00:30:09.523 cpu : usr=94.92%, sys=4.87%, ctx=19, majf=0, minf=243 00:30:09.523 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:09.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.523 issued rwts: total=1824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.523 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:09.523 00:30:09.523 Run status group 0 (all jobs): 00:30:09.523 READ: bw=727KiB/s (745kB/s), 727KiB/s-727KiB/s (745kB/s-745kB/s), io=7296KiB (7471kB), run=10030-10030msec 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@45 -- # for sub in "$@" 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 00:30:09.523 real 0m11.105s 00:30:09.523 user 0m26.461s 00:30:09.523 sys 0m0.794s 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 ************************************ 00:30:09.523 END TEST fio_dif_1_default 00:30:09.523 ************************************ 00:30:09.523 20:09:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:09.523 20:09:55 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:09.523 20:09:55 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 ************************************ 00:30:09.523 START TEST fio_dif_1_multi_subsystems 00:30:09.523 ************************************ 
00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 bdev_null0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 
20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 [2024-07-24 20:09:55.719248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 bdev_null1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.523 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.524 { 00:30:09.524 "params": { 00:30:09.524 "name": "Nvme$subsystem", 00:30:09.524 "trtype": "$TEST_TRANSPORT", 00:30:09.524 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:30:09.524 "adrfam": "ipv4", 00:30:09.524 "trsvcid": "$NVMF_PORT", 00:30:09.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.524 "hdgst": ${hdgst:-false}, 00:30:09.524 "ddgst": ${ddgst:-false} 00:30:09.524 }, 00:30:09.524 "method": "bdev_nvme_attach_controller" 00:30:09.524 } 00:30:09.524 EOF 00:30:09.524 )") 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.524 { 00:30:09.524 "params": { 00:30:09.524 "name": "Nvme$subsystem", 00:30:09.524 "trtype": "$TEST_TRANSPORT", 00:30:09.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.524 "adrfam": "ipv4", 00:30:09.524 "trsvcid": "$NVMF_PORT", 00:30:09.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.524 "hdgst": ${hdgst:-false}, 00:30:09.524 "ddgst": ${ddgst:-false} 00:30:09.524 }, 00:30:09.524 "method": "bdev_nvme_attach_controller" 00:30:09.524 } 00:30:09.524 EOF 00:30:09.524 )") 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:09.524 "params": { 00:30:09.524 "name": "Nvme0", 00:30:09.524 "trtype": "tcp", 00:30:09.524 "traddr": "10.0.0.2", 00:30:09.524 "adrfam": "ipv4", 00:30:09.524 "trsvcid": "4420", 00:30:09.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:09.524 "hdgst": false, 00:30:09.524 "ddgst": false 00:30:09.524 }, 00:30:09.524 "method": "bdev_nvme_attach_controller" 00:30:09.524 },{ 00:30:09.524 "params": { 00:30:09.524 "name": "Nvme1", 00:30:09.524 "trtype": "tcp", 00:30:09.524 "traddr": "10.0.0.2", 00:30:09.524 "adrfam": "ipv4", 00:30:09.524 "trsvcid": "4420", 00:30:09.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:09.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:09.524 "hdgst": false, 00:30:09.524 "ddgst": false 00:30:09.524 }, 00:30:09.524 "method": "bdev_nvme_attach_controller" 00:30:09.524 }' 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:09.524 20:09:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.524 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:09.524 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:09.524 fio-3.35 00:30:09.524 Starting 2 threads 00:30:09.524 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.547 00:30:19.547 filename0: (groupid=0, jobs=1): err= 0: pid=3880916: Wed Jul 24 20:10:06 2024 00:30:19.547 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:30:19.547 slat (nsec): min=5378, max=38570, avg=7194.22, stdev=4712.71 00:30:19.547 clat (usec): min=41757, max=43231, avg=41985.76, stdev=88.73 00:30:19.547 lat (usec): min=41763, max=43264, avg=41992.95, stdev=88.98 00:30:19.547 clat percentiles (usec): 00:30:19.547 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:19.547 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:19.547 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:19.547 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:19.547 | 99.99th=[43254] 00:30:19.547 bw ( KiB/s): min= 352, max= 384, per=49.99%, avg=380.80, stdev= 9.85, samples=20 00:30:19.547 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:19.547 lat (msec) : 50=100.00% 00:30:19.547 cpu : usr=97.20%, sys=2.59%, ctx=14, majf=0, minf=196 00:30:19.547 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:19.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.547 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.547 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:19.547 filename1: (groupid=0, jobs=1): err= 0: pid=3880917: Wed Jul 24 20:10:06 2024 00:30:19.547 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10002msec) 00:30:19.547 slat (nsec): min=5370, max=40571, avg=7317.51, stdev=4680.38 00:30:19.547 clat (usec): min=41793, max=43867, avg=42002.30, stdev=173.35 00:30:19.547 lat (usec): min=41798, max=43899, avg=42009.62, stdev=174.39 00:30:19.547 clat percentiles (usec): 00:30:19.547 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:19.547 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:19.547 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:19.547 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:30:19.547 | 99.99th=[43779] 00:30:19.548 bw ( KiB/s): min= 352, max= 384, per=49.86%, avg=379.20, stdev=11.72, samples=20 00:30:19.548 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:30:19.548 lat (msec) : 50=100.00% 00:30:19.548 cpu : usr=97.29%, sys=2.50%, ctx=15, majf=0, minf=74 00:30:19.548 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.548 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.548 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:19.548 00:30:19.548 Run status group 0 (all jobs): 00:30:19.548 READ: bw=760KiB/s (778kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7632KiB (7815kB), run=10002-10040msec 00:30:19.548 20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:19.548 
20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:19.548 20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:19.548 20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:19.548 20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:19.548 20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:19.548 20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 00:30:19.548 real 0m11.357s 00:30:19.548 user 0m35.250s 00:30:19.548 sys 0m0.806s 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 ************************************ 00:30:19.548 END TEST fio_dif_1_multi_subsystems 00:30:19.548 ************************************ 00:30:19.548 20:10:07 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:19.548 20:10:07 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:19.548 20:10:07 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 ************************************ 00:30:19.548 START TEST fio_dif_rand_params 00:30:19.548 ************************************ 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 bdev_null0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 20:10:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:19.548 [2024-07-24 20:10:07.144714] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.548 { 00:30:19.548 "params": { 00:30:19.548 "name": "Nvme$subsystem", 00:30:19.548 "trtype": "$TEST_TRANSPORT", 00:30:19.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.548 "adrfam": "ipv4", 00:30:19.548 "trsvcid": "$NVMF_PORT", 00:30:19.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.548 "hdgst": ${hdgst:-false}, 00:30:19.548 "ddgst": ${ddgst:-false} 00:30:19.548 }, 00:30:19.548 "method": "bdev_nvme_attach_controller" 00:30:19.548 } 00:30:19.548 EOF 00:30:19.548 )") 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:19.548 20:10:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:19.548 20:10:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:19.548 "params": { 00:30:19.548 "name": "Nvme0", 00:30:19.548 "trtype": "tcp", 00:30:19.548 "traddr": "10.0.0.2", 00:30:19.548 "adrfam": "ipv4", 00:30:19.548 "trsvcid": "4420", 00:30:19.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:19.549 "hdgst": false, 00:30:19.549 "ddgst": false 00:30:19.549 }, 00:30:19.549 "method": "bdev_nvme_attach_controller" 00:30:19.549 }' 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:19.549 20:10:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:30:19.813 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:19.813 ... 00:30:19.813 fio-3.35 00:30:19.813 Starting 3 threads 00:30:19.813 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.401 00:30:26.401 filename0: (groupid=0, jobs=1): err= 0: pid=3883374: Wed Jul 24 20:10:13 2024 00:30:26.401 read: IOPS=92, BW=11.6MiB/s (12.2MB/s)(58.2MiB/5019msec) 00:30:26.401 slat (nsec): min=5395, max=31558, avg=7747.58, stdev=1711.54 00:30:26.401 clat (usec): min=7306, max=95739, avg=32292.69, stdev=24345.42 00:30:26.401 lat (usec): min=7312, max=95745, avg=32300.44, stdev=24345.46 00:30:26.401 clat percentiles (usec): 00:30:26.401 | 1.00th=[ 7635], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:26.401 | 30.00th=[10814], 40.00th=[11863], 50.00th=[13829], 60.00th=[51119], 00:30:26.401 | 70.00th=[52691], 80.00th=[53740], 90.00th=[54789], 95.00th=[56361], 00:30:26.401 | 99.00th=[93848], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:30:26.401 | 99.99th=[95945] 00:30:26.401 bw ( KiB/s): min= 9216, max=15360, per=29.80%, avg=11852.80, stdev=2377.26, samples=10 00:30:26.401 iops : min= 72, max= 120, avg=92.60, stdev=18.57, samples=10 00:30:26.401 lat (msec) : 10=25.32%, 20=27.25%, 50=1.93%, 100=45.49% 00:30:26.401 cpu : usr=96.81%, sys=2.89%, ctx=8, majf=0, minf=43 00:30:26.401 IO depths : 1=5.2%, 2=94.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.401 issued rwts: total=466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:26.401 filename0: (groupid=0, jobs=1): err= 0: pid=3883375: Wed Jul 24 20:10:13 2024 00:30:26.401 read: IOPS=102, BW=12.8MiB/s (13.4MB/s)(64.5MiB/5041msec) 00:30:26.401 slat (nsec): min=5387, 
max=46671, avg=7409.18, stdev=2259.36 00:30:26.401 clat (usec): min=6821, max=93773, avg=29281.91, stdev=21806.71 00:30:26.401 lat (usec): min=6827, max=93780, avg=29289.32, stdev=21806.97 00:30:26.401 clat percentiles (usec): 00:30:26.401 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9503], 00:30:26.401 | 30.00th=[10552], 40.00th=[11731], 50.00th=[13042], 60.00th=[50594], 00:30:26.401 | 70.00th=[52167], 80.00th=[53216], 90.00th=[53740], 95.00th=[54789], 00:30:26.401 | 99.00th=[58459], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:30:26.401 | 99.99th=[93848] 00:30:26.401 bw ( KiB/s): min= 9216, max=18176, per=33.01%, avg=13132.80, stdev=2933.90, samples=10 00:30:26.401 iops : min= 72, max= 142, avg=102.60, stdev=22.92, samples=10 00:30:26.401 lat (msec) : 10=24.81%, 20=31.98%, 50=1.94%, 100=41.28% 00:30:26.401 cpu : usr=96.96%, sys=2.74%, ctx=7, majf=0, minf=78 00:30:26.401 IO depths : 1=5.0%, 2=95.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.402 issued rwts: total=516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:26.402 filename0: (groupid=0, jobs=1): err= 0: pid=3883376: Wed Jul 24 20:10:13 2024 00:30:26.402 read: IOPS=116, BW=14.5MiB/s (15.3MB/s)(73.5MiB/5052msec) 00:30:26.402 slat (nsec): min=5382, max=31757, avg=7555.17, stdev=1889.45 00:30:26.402 clat (usec): min=7270, max=95197, avg=25759.26, stdev=21693.09 00:30:26.402 lat (usec): min=7278, max=95204, avg=25766.82, stdev=21693.09 00:30:26.402 clat percentiles (usec): 00:30:26.402 | 1.00th=[ 7701], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 9110], 00:30:26.402 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11600], 60.00th=[13042], 00:30:26.402 | 70.00th=[51119], 80.00th=[52691], 90.00th=[53740], 95.00th=[54264], 00:30:26.402 | 
99.00th=[93848], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:30:26.402 | 99.99th=[94897] 00:30:26.402 bw ( KiB/s): min= 8448, max=24576, per=37.65%, avg=14976.00, stdev=4067.90, samples=10 00:30:26.402 iops : min= 66, max= 192, avg=117.00, stdev=31.78, samples=10 00:30:26.402 lat (msec) : 10=30.27%, 20=34.86%, 50=1.87%, 100=32.99% 00:30:26.402 cpu : usr=97.11%, sys=2.55%, ctx=9, majf=0, minf=149 00:30:26.402 IO depths : 1=7.3%, 2=92.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:26.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.402 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:26.402 00:30:26.402 Run status group 0 (all jobs): 00:30:26.402 READ: bw=38.8MiB/s (40.7MB/s), 11.6MiB/s-14.5MiB/s (12.2MB/s-15.3MB/s), io=196MiB (206MB), run=5019-5052msec 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 bdev_null0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 [2024-07-24 20:10:13.375731] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 bdev_null1 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 bdev_null2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:26.402 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:26.403 { 00:30:26.403 "params": { 00:30:26.403 "name": "Nvme$subsystem", 00:30:26.403 "trtype": "$TEST_TRANSPORT", 00:30:26.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.403 "adrfam": "ipv4", 00:30:26.403 "trsvcid": 
"$NVMF_PORT", 00:30:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.403 "hdgst": ${hdgst:-false}, 00:30:26.403 "ddgst": ${ddgst:-false} 00:30:26.403 }, 00:30:26.403 "method": "bdev_nvme_attach_controller" 00:30:26.403 } 00:30:26.403 EOF 00:30:26.403 )") 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:26.403 { 00:30:26.403 "params": { 00:30:26.403 "name": "Nvme$subsystem", 00:30:26.403 "trtype": "$TEST_TRANSPORT", 00:30:26.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.403 "adrfam": "ipv4", 00:30:26.403 "trsvcid": "$NVMF_PORT", 00:30:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.403 "hdgst": ${hdgst:-false}, 00:30:26.403 "ddgst": ${ddgst:-false} 00:30:26.403 }, 00:30:26.403 "method": "bdev_nvme_attach_controller" 00:30:26.403 } 00:30:26.403 EOF 00:30:26.403 )") 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:26.403 { 00:30:26.403 "params": { 00:30:26.403 "name": "Nvme$subsystem", 00:30:26.403 "trtype": "$TEST_TRANSPORT", 00:30:26.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.403 "adrfam": "ipv4", 00:30:26.403 "trsvcid": "$NVMF_PORT", 00:30:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.403 "hdgst": ${hdgst:-false}, 00:30:26.403 "ddgst": ${ddgst:-false} 00:30:26.403 }, 00:30:26.403 "method": "bdev_nvme_attach_controller" 00:30:26.403 } 00:30:26.403 EOF 00:30:26.403 )") 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:26.403 "params": { 00:30:26.403 "name": "Nvme0", 00:30:26.403 "trtype": "tcp", 00:30:26.403 "traddr": "10.0.0.2", 00:30:26.403 "adrfam": "ipv4", 00:30:26.403 "trsvcid": "4420", 00:30:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:26.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:26.403 "hdgst": false, 00:30:26.403 "ddgst": false 00:30:26.403 }, 00:30:26.403 "method": "bdev_nvme_attach_controller" 00:30:26.403 },{ 00:30:26.403 "params": { 00:30:26.403 "name": "Nvme1", 00:30:26.403 "trtype": "tcp", 00:30:26.403 "traddr": "10.0.0.2", 00:30:26.403 "adrfam": "ipv4", 00:30:26.403 "trsvcid": "4420", 00:30:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:26.403 "hdgst": false, 00:30:26.403 "ddgst": false 00:30:26.403 }, 00:30:26.403 "method": "bdev_nvme_attach_controller" 00:30:26.403 },{ 00:30:26.403 "params": { 00:30:26.403 "name": "Nvme2", 00:30:26.403 "trtype": "tcp", 00:30:26.403 "traddr": "10.0.0.2", 00:30:26.403 "adrfam": "ipv4", 00:30:26.403 "trsvcid": "4420", 00:30:26.403 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:26.403 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:26.403 "hdgst": false, 00:30:26.403 "ddgst": false 00:30:26.403 }, 00:30:26.403 "method": "bdev_nvme_attach_controller" 00:30:26.403 }' 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:26.403 20:10:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:26.403 20:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:26.403 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:26.403 ... 00:30:26.403 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:26.403 ... 00:30:26.403 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:26.403 ... 
00:30:26.403 fio-3.35 00:30:26.403 Starting 24 threads 00:30:26.403 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.626 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884785: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=544, BW=2177KiB/s (2229kB/s)(21.3MiB/10004msec) 00:30:38.627 slat (nsec): min=5574, max=80388, avg=9908.85, stdev=6673.71 00:30:38.627 clat (usec): min=3529, max=66055, avg=29320.75, stdev=6198.57 00:30:38.627 lat (usec): min=3548, max=66063, avg=29330.66, stdev=6199.15 00:30:38.627 clat percentiles (usec): 00:30:38.627 | 1.00th=[ 5997], 5.00th=[17695], 10.00th=[20055], 20.00th=[23725], 00:30:38.627 | 30.00th=[30278], 40.00th=[31065], 50.00th=[31327], 60.00th=[31851], 00:30:38.627 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:30:38.627 | 99.00th=[44827], 99.50th=[50594], 99.90th=[56886], 99.95th=[57934], 00:30:38.627 | 99.99th=[65799] 00:30:38.627 bw ( KiB/s): min= 1916, max= 2554, per=4.58%, avg=2177.16, stdev=171.48, samples=19 00:30:38.627 iops : min= 479, max= 638, avg=544.26, stdev=42.81, samples=19 00:30:38.627 lat (msec) : 4=0.51%, 10=0.66%, 20=8.47%, 50=89.71%, 100=0.64% 00:30:38.627 cpu : usr=98.92%, sys=0.66%, ctx=69, majf=0, minf=40 00:30:38.627 IO depths : 1=4.4%, 2=9.0%, 4=21.5%, 8=56.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:30:38.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 issued rwts: total=5444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884786: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=495, BW=1980KiB/s (2028kB/s)(19.4MiB/10007msec) 00:30:38.627 slat (usec): min=5, max=113, avg=17.46, stdev=15.40 00:30:38.627 clat (usec): min=7688, max=58038, avg=32196.76, stdev=5076.72 00:30:38.627 lat (usec): min=7694, max=58044, avg=32214.22, 
stdev=5077.36 00:30:38.627 clat percentiles (usec): 00:30:38.627 | 1.00th=[19268], 5.00th=[23200], 10.00th=[28443], 20.00th=[30802], 00:30:38.627 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.627 | 70.00th=[32637], 80.00th=[33162], 90.00th=[38011], 95.00th=[42730], 00:30:38.627 | 99.00th=[50594], 99.50th=[51643], 99.90th=[55837], 99.95th=[57934], 00:30:38.627 | 99.99th=[57934] 00:30:38.627 bw ( KiB/s): min= 1872, max= 2112, per=4.16%, avg=1976.95, stdev=76.75, samples=19 00:30:38.627 iops : min= 468, max= 528, avg=494.16, stdev=19.09, samples=19 00:30:38.627 lat (msec) : 10=0.04%, 20=1.33%, 50=97.42%, 100=1.21% 00:30:38.627 cpu : usr=99.22%, sys=0.47%, ctx=18, majf=0, minf=20 00:30:38.627 IO depths : 1=2.4%, 2=5.9%, 4=17.4%, 8=63.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:38.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 issued rwts: total=4954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884787: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=501, BW=2005KiB/s (2054kB/s)(19.6MiB/10001msec) 00:30:38.627 slat (nsec): min=5563, max=98435, avg=18169.71, stdev=14586.56 00:30:38.627 clat (usec): min=14906, max=55802, avg=31771.25, stdev=4671.12 00:30:38.627 lat (usec): min=14923, max=55807, avg=31789.42, stdev=4670.72 00:30:38.627 clat percentiles (usec): 00:30:38.627 | 1.00th=[18744], 5.00th=[22676], 10.00th=[27657], 20.00th=[30540], 00:30:38.627 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.627 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34341], 95.00th=[40633], 00:30:38.627 | 99.00th=[50070], 99.50th=[51643], 99.90th=[55837], 99.95th=[55837], 00:30:38.627 | 99.99th=[55837] 00:30:38.627 bw ( KiB/s): min= 1888, max= 2096, per=4.21%, avg=2002.79, stdev=63.10, 
samples=19 00:30:38.627 iops : min= 472, max= 524, avg=500.58, stdev=15.77, samples=19 00:30:38.627 lat (msec) : 20=1.68%, 50=97.37%, 100=0.96% 00:30:38.627 cpu : usr=98.88%, sys=0.68%, ctx=98, majf=0, minf=29 00:30:38.627 IO depths : 1=3.0%, 2=7.0%, 4=19.5%, 8=60.4%, 16=10.2%, 32=0.0%, >=64=0.0% 00:30:38.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 issued rwts: total=5014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884788: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10019msec) 00:30:38.627 slat (nsec): min=5556, max=92807, avg=16327.12, stdev=14561.48 00:30:38.627 clat (usec): min=15685, max=60752, avg=32478.46, stdev=5071.36 00:30:38.627 lat (usec): min=15691, max=60776, avg=32494.79, stdev=5071.31 00:30:38.627 clat percentiles (usec): 00:30:38.627 | 1.00th=[19006], 5.00th=[23200], 10.00th=[29492], 20.00th=[30802], 00:30:38.627 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.627 | 70.00th=[32637], 80.00th=[33162], 90.00th=[39584], 95.00th=[42730], 00:30:38.627 | 99.00th=[49021], 99.50th=[52691], 99.90th=[59507], 99.95th=[59507], 00:30:38.627 | 99.99th=[60556] 00:30:38.627 bw ( KiB/s): min= 1792, max= 2123, per=4.12%, avg=1959.65, stdev=84.18, samples=20 00:30:38.627 iops : min= 448, max= 530, avg=489.80, stdev=20.95, samples=20 00:30:38.627 lat (msec) : 20=1.67%, 50=97.40%, 100=0.94% 00:30:38.627 cpu : usr=99.22%, sys=0.48%, ctx=12, majf=0, minf=33 00:30:38.627 IO depths : 1=2.8%, 2=6.0%, 4=17.4%, 8=63.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:30:38.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 complete : 0=0.0%, 4=92.7%, 8=2.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 issued rwts: total=4917,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:30:38.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884789: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.5MiB/10030msec) 00:30:38.627 slat (nsec): min=5388, max=99140, avg=12335.52, stdev=10638.57 00:30:38.627 clat (usec): min=5737, max=59644, avg=32122.14, stdev=5584.19 00:30:38.627 lat (usec): min=5754, max=59651, avg=32134.47, stdev=5584.61 00:30:38.627 clat percentiles (usec): 00:30:38.627 | 1.00th=[13960], 5.00th=[22676], 10.00th=[27919], 20.00th=[30540], 00:30:38.627 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.627 | 70.00th=[32637], 80.00th=[33162], 90.00th=[38011], 95.00th=[43254], 00:30:38.627 | 99.00th=[50070], 99.50th=[52167], 99.90th=[59507], 99.95th=[59507], 00:30:38.627 | 99.99th=[59507] 00:30:38.627 bw ( KiB/s): min= 1840, max= 2160, per=4.18%, avg=1986.70, stdev=77.97, samples=20 00:30:38.627 iops : min= 460, max= 540, avg=496.60, stdev=19.45, samples=20 00:30:38.627 lat (msec) : 10=0.64%, 20=2.39%, 50=95.93%, 100=1.04% 00:30:38.627 cpu : usr=98.88%, sys=0.74%, ctx=30, majf=0, minf=32 00:30:38.627 IO depths : 1=1.1%, 2=2.4%, 4=9.9%, 8=74.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:38.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 issued rwts: total=4985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884790: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.3MiB/10005msec) 00:30:38.627 slat (usec): min=5, max=100, avg=19.03, stdev=14.61 00:30:38.627 clat (usec): min=5647, max=57214, avg=32180.93, stdev=4695.10 00:30:38.627 lat (usec): min=5655, max=57221, avg=32199.96, stdev=4694.37 00:30:38.627 clat 
percentiles (usec): 00:30:38.627 | 1.00th=[18744], 5.00th=[26084], 10.00th=[29754], 20.00th=[30802], 00:30:38.627 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.627 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34866], 95.00th=[41681], 00:30:38.627 | 99.00th=[50070], 99.50th=[52167], 99.90th=[53740], 99.95th=[57410], 00:30:38.627 | 99.99th=[57410] 00:30:38.627 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1962.89, stdev=74.99, samples=19 00:30:38.627 iops : min= 448, max= 512, avg=490.68, stdev=18.70, samples=19 00:30:38.627 lat (msec) : 10=0.32%, 20=1.31%, 50=97.35%, 100=1.01% 00:30:38.627 cpu : usr=98.88%, sys=0.77%, ctx=36, majf=0, minf=34 00:30:38.627 IO depths : 1=3.4%, 2=6.9%, 4=18.0%, 8=61.4%, 16=10.3%, 32=0.0%, >=64=0.0% 00:30:38.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 complete : 0=0.0%, 4=92.6%, 8=2.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 issued rwts: total=4951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884791: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=511, BW=2046KiB/s (2095kB/s)(20.0MiB/10001msec) 00:30:38.627 slat (usec): min=5, max=111, avg=15.54, stdev=13.77 00:30:38.627 clat (usec): min=10748, max=56340, avg=31152.54, stdev=5269.15 00:30:38.627 lat (usec): min=10756, max=56346, avg=31168.08, stdev=5270.23 00:30:38.627 clat percentiles (usec): 00:30:38.627 | 1.00th=[15401], 5.00th=[20317], 10.00th=[24773], 20.00th=[30016], 00:30:38.627 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31589], 60.00th=[32113], 00:30:38.627 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[39060], 00:30:38.627 | 99.00th=[48497], 99.50th=[51643], 99.90th=[55313], 99.95th=[56361], 00:30:38.627 | 99.99th=[56361] 00:30:38.627 bw ( KiB/s): min= 1872, max= 2192, per=4.31%, avg=2046.32, stdev=73.49, samples=19 00:30:38.627 iops : min= 468, max= 
548, avg=511.58, stdev=18.37, samples=19 00:30:38.627 lat (msec) : 20=4.83%, 50=94.35%, 100=0.82% 00:30:38.627 cpu : usr=99.08%, sys=0.59%, ctx=11, majf=0, minf=40 00:30:38.627 IO depths : 1=3.9%, 2=8.0%, 4=19.9%, 8=59.3%, 16=9.0%, 32=0.0%, >=64=0.0% 00:30:38.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.627 issued rwts: total=5116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.627 filename0: (groupid=0, jobs=1): err= 0: pid=3884793: Wed Jul 24 20:10:25 2024 00:30:38.627 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.3MiB/10009msec) 00:30:38.627 slat (usec): min=5, max=175, avg=19.52, stdev=14.52 00:30:38.627 clat (usec): min=14291, max=57384, avg=32161.85, stdev=4466.93 00:30:38.628 lat (usec): min=14297, max=57392, avg=32181.37, stdev=4466.30 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[20579], 5.00th=[25297], 10.00th=[29754], 20.00th=[30802], 00:30:38.628 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.628 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34341], 95.00th=[41157], 00:30:38.628 | 99.00th=[49546], 99.50th=[51643], 99.90th=[57410], 99.95th=[57410], 00:30:38.628 | 99.99th=[57410] 00:30:38.628 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1970.16, stdev=79.67, samples=19 00:30:38.628 iops : min= 448, max= 512, avg=492.42, stdev=19.81, samples=19 00:30:38.628 lat (msec) : 20=0.87%, 50=98.22%, 100=0.91% 00:30:38.628 cpu : usr=96.16%, sys=1.97%, ctx=38, majf=0, minf=36 00:30:38.628 IO depths : 1=4.2%, 2=8.4%, 4=20.0%, 8=58.4%, 16=8.9%, 32=0.0%, >=64=0.0% 00:30:38.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 issued rwts: total=4953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.628 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:30:38.628 filename1: (groupid=0, jobs=1): err= 0: pid=3884794: Wed Jul 24 20:10:25 2024 00:30:38.628 read: IOPS=490, BW=1963KiB/s (2011kB/s)(19.2MiB/10023msec) 00:30:38.628 slat (usec): min=5, max=100, avg=18.67, stdev=14.84 00:30:38.628 clat (usec): min=14663, max=53812, avg=32445.56, stdev=4927.14 00:30:38.628 lat (usec): min=14670, max=53821, avg=32464.24, stdev=4927.15 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[18482], 5.00th=[23987], 10.00th=[29492], 20.00th=[30802], 00:30:38.628 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.628 | 70.00th=[32637], 80.00th=[33162], 90.00th=[39060], 95.00th=[42730], 00:30:38.628 | 99.00th=[49021], 99.50th=[50594], 99.90th=[53216], 99.95th=[53740], 00:30:38.628 | 99.99th=[53740] 00:30:38.628 bw ( KiB/s): min= 1792, max= 2120, per=4.13%, avg=1962.40, stdev=82.33, samples=20 00:30:38.628 iops : min= 448, max= 530, avg=490.60, stdev=20.58, samples=20 00:30:38.628 lat (msec) : 20=1.83%, 50=97.62%, 100=0.55% 00:30:38.628 cpu : usr=98.56%, sys=0.82%, ctx=30, majf=0, minf=39 00:30:38.628 IO depths : 1=3.3%, 2=6.5%, 4=17.4%, 8=62.7%, 16=10.1%, 32=0.0%, >=64=0.0% 00:30:38.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 complete : 0=0.0%, 4=92.4%, 8=2.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.628 filename1: (groupid=0, jobs=1): err= 0: pid=3884795: Wed Jul 24 20:10:25 2024 00:30:38.628 read: IOPS=473, BW=1895KiB/s (1941kB/s)(18.5MiB/10001msec) 00:30:38.628 slat (nsec): min=5559, max=88966, avg=17348.61, stdev=13684.38 00:30:38.628 clat (usec): min=7423, max=59657, avg=33631.66, stdev=6179.73 00:30:38.628 lat (usec): min=7429, max=59664, avg=33649.01, stdev=6178.35 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[19268], 5.00th=[23987], 
10.00th=[29492], 20.00th=[30802], 00:30:38.628 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:38.628 | 70.00th=[33162], 80.00th=[38536], 90.00th=[43254], 95.00th=[46400], 00:30:38.628 | 99.00th=[51119], 99.50th=[53216], 99.90th=[55313], 99.95th=[55837], 00:30:38.628 | 99.99th=[59507] 00:30:38.628 bw ( KiB/s): min= 1660, max= 2048, per=3.97%, avg=1887.11, stdev=109.60, samples=19 00:30:38.628 iops : min= 415, max= 512, avg=471.74, stdev=27.34, samples=19 00:30:38.628 lat (msec) : 10=0.13%, 20=1.48%, 50=97.09%, 100=1.31% 00:30:38.628 cpu : usr=99.13%, sys=0.54%, ctx=36, majf=0, minf=50 00:30:38.628 IO depths : 1=2.9%, 2=6.4%, 4=17.2%, 8=62.7%, 16=10.8%, 32=0.0%, >=64=0.0% 00:30:38.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 complete : 0=0.0%, 4=92.6%, 8=2.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 issued rwts: total=4739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.628 filename1: (groupid=0, jobs=1): err= 0: pid=3884796: Wed Jul 24 20:10:25 2024 00:30:38.628 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10010msec) 00:30:38.628 slat (nsec): min=5563, max=99050, avg=15937.96, stdev=13360.02 00:30:38.628 clat (usec): min=12908, max=68503, avg=34660.86, stdev=6797.56 00:30:38.628 lat (usec): min=12916, max=68511, avg=34676.79, stdev=6796.76 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[18744], 5.00th=[25035], 10.00th=[30016], 20.00th=[31065], 00:30:38.628 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32900], 00:30:38.628 | 70.00th=[35914], 80.00th=[40633], 90.00th=[44303], 95.00th=[47973], 00:30:38.628 | 99.00th=[55313], 99.50th=[56886], 99.90th=[62653], 99.95th=[68682], 00:30:38.628 | 99.99th=[68682] 00:30:38.628 bw ( KiB/s): min= 1664, max= 2032, per=3.87%, avg=1837.05, stdev=98.85, samples=19 00:30:38.628 iops : min= 416, max= 508, avg=459.26, stdev=24.71, samples=19 00:30:38.628 
lat (msec) : 20=1.43%, 50=95.49%, 100=3.08% 00:30:38.628 cpu : usr=99.12%, sys=0.58%, ctx=14, majf=0, minf=37 00:30:38.628 IO depths : 1=0.4%, 2=1.8%, 4=11.7%, 8=72.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:38.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 complete : 0=0.0%, 4=91.3%, 8=4.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 issued rwts: total=4607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.628 filename1: (groupid=0, jobs=1): err= 0: pid=3884797: Wed Jul 24 20:10:25 2024 00:30:38.628 read: IOPS=525, BW=2104KiB/s (2154kB/s)(20.6MiB/10028msec) 00:30:38.628 slat (usec): min=5, max=102, avg=12.31, stdev=11.51 00:30:38.628 clat (usec): min=9934, max=56942, avg=30311.78, stdev=5422.05 00:30:38.628 lat (usec): min=9942, max=56948, avg=30324.08, stdev=5423.31 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[15401], 5.00th=[19792], 10.00th=[21890], 20.00th=[27395], 00:30:38.628 | 30.00th=[30278], 40.00th=[31065], 50.00th=[31589], 60.00th=[31851], 00:30:38.628 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33817], 95.00th=[37487], 00:30:38.628 | 99.00th=[45876], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:30:38.628 | 99.99th=[56886] 00:30:38.628 bw ( KiB/s): min= 1920, max= 2400, per=4.43%, avg=2104.80, stdev=113.58, samples=20 00:30:38.628 iops : min= 480, max= 600, avg=526.05, stdev=28.37, samples=20 00:30:38.628 lat (msec) : 10=0.06%, 20=5.54%, 50=94.37%, 100=0.04% 00:30:38.628 cpu : usr=99.07%, sys=0.59%, ctx=69, majf=0, minf=44 00:30:38.628 IO depths : 1=2.7%, 2=5.3%, 4=13.0%, 8=67.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:38.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 complete : 0=0.0%, 4=91.5%, 8=4.7%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 issued rwts: total=5274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.628 latency : target=0, window=0, percentile=100.00%, depth=16 
00:30:38.628 filename1: (groupid=0, jobs=1): err= 0: pid=3884798: Wed Jul 24 20:10:25 2024 00:30:38.628 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10023msec) 00:30:38.628 slat (usec): min=5, max=102, avg=16.73, stdev=16.11 00:30:38.628 clat (usec): min=12232, max=60621, avg=33188.64, stdev=5445.06 00:30:38.628 lat (usec): min=12239, max=60630, avg=33205.37, stdev=5444.09 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[19006], 5.00th=[25822], 10.00th=[30016], 20.00th=[31065], 00:30:38.628 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:38.628 | 70.00th=[32900], 80.00th=[33817], 90.00th=[41157], 95.00th=[44303], 00:30:38.628 | 99.00th=[51643], 99.50th=[53216], 99.90th=[58983], 99.95th=[60556], 00:30:38.628 | 99.99th=[60556] 00:30:38.628 bw ( KiB/s): min= 1664, max= 2048, per=4.04%, avg=1921.60, stdev=111.05, samples=20 00:30:38.628 iops : min= 416, max= 512, avg=480.40, stdev=27.76, samples=20 00:30:38.628 lat (msec) : 20=1.33%, 50=96.85%, 100=1.83% 00:30:38.628 cpu : usr=98.83%, sys=0.71%, ctx=191, majf=0, minf=38 00:30:38.628 IO depths : 1=0.9%, 2=1.9%, 4=6.6%, 8=75.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:38.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 complete : 0=0.0%, 4=90.4%, 8=7.1%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.628 filename1: (groupid=0, jobs=1): err= 0: pid=3884799: Wed Jul 24 20:10:25 2024 00:30:38.628 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10005msec) 00:30:38.628 slat (nsec): min=5407, max=89941, avg=12163.17, stdev=10352.16 00:30:38.628 clat (usec): min=12181, max=62738, avg=31756.13, stdev=2052.40 00:30:38.628 lat (usec): min=12186, max=62758, avg=31768.29, stdev=2052.86 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[29230], 5.00th=[30016], 10.00th=[30278], 20.00th=[31065], 00:30:38.628 
| 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.628 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:30:38.628 | 99.00th=[34341], 99.50th=[34866], 99.90th=[50070], 99.95th=[50070], 00:30:38.628 | 99.99th=[62653] 00:30:38.628 bw ( KiB/s): min= 1916, max= 2052, per=4.21%, avg=2000.32, stdev=63.51, samples=19 00:30:38.628 iops : min= 479, max= 513, avg=500.00, stdev=15.82, samples=19 00:30:38.628 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:30:38.628 cpu : usr=99.28%, sys=0.42%, ctx=11, majf=0, minf=28 00:30:38.628 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:38.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.628 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.628 filename1: (groupid=0, jobs=1): err= 0: pid=3884800: Wed Jul 24 20:10:25 2024 00:30:38.628 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10001msec) 00:30:38.628 slat (usec): min=5, max=117, avg=16.03, stdev=12.96 00:30:38.628 clat (usec): min=15740, max=58074, avg=31778.48, stdev=3599.81 00:30:38.628 lat (usec): min=15746, max=58082, avg=31794.51, stdev=3599.97 00:30:38.628 clat percentiles (usec): 00:30:38.628 | 1.00th=[19530], 5.00th=[26608], 10.00th=[30016], 20.00th=[30802], 00:30:38.628 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.628 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[34866], 00:30:38.628 | 99.00th=[45876], 99.50th=[50594], 99.90th=[57934], 99.95th=[57934], 00:30:38.628 | 99.99th=[57934] 00:30:38.628 bw ( KiB/s): min= 1920, max= 2144, per=4.23%, avg=2012.63, stdev=54.28, samples=19 00:30:38.628 iops : min= 480, max= 536, avg=503.16, stdev=13.57, samples=19 00:30:38.628 lat (msec) : 20=1.18%, 50=98.31%, 100=0.52% 00:30:38.629 cpu : 
usr=98.46%, sys=0.86%, ctx=48, majf=0, minf=47 00:30:38.629 IO depths : 1=1.1%, 2=2.4%, 4=11.2%, 8=73.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=90.5%, 8=4.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename1: (groupid=0, jobs=1): err= 0: pid=3884802: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.7MiB/10022msec) 00:30:38.629 slat (nsec): min=5033, max=91928, avg=17016.39, stdev=13783.44 00:30:38.629 clat (usec): min=17182, max=61990, avg=33427.20, stdev=6166.02 00:30:38.629 lat (usec): min=17195, max=61997, avg=33444.21, stdev=6166.39 00:30:38.629 clat percentiles (usec): 00:30:38.629 | 1.00th=[19792], 5.00th=[22938], 10.00th=[28443], 20.00th=[30802], 00:30:38.629 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:38.629 | 70.00th=[33162], 80.00th=[38011], 90.00th=[42730], 95.00th=[45351], 00:30:38.629 | 99.00th=[52691], 99.50th=[54789], 99.90th=[58459], 99.95th=[62129], 00:30:38.629 | 99.99th=[62129] 00:30:38.629 bw ( KiB/s): min= 1788, max= 2048, per=4.01%, avg=1904.45, stdev=86.07, samples=20 00:30:38.629 iops : min= 447, max= 512, avg=476.10, stdev=21.52, samples=20 00:30:38.629 lat (msec) : 20=1.07%, 50=96.99%, 100=1.95% 00:30:38.629 cpu : usr=98.20%, sys=1.14%, ctx=30, majf=0, minf=44 00:30:38.629 IO depths : 1=3.2%, 2=6.5%, 4=17.9%, 8=62.5%, 16=9.9%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=92.4%, 8=2.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=4778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename2: (groupid=0, jobs=1): err= 0: 
pid=3884803: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=501, BW=2007KiB/s (2056kB/s)(19.6MiB/10011msec) 00:30:38.629 slat (usec): min=5, max=101, avg=15.92, stdev=13.76 00:30:38.629 clat (usec): min=12659, max=43711, avg=31748.28, stdev=1750.76 00:30:38.629 lat (usec): min=12665, max=43729, avg=31764.20, stdev=1750.73 00:30:38.629 clat percentiles (usec): 00:30:38.629 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:30:38.629 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.629 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:30:38.629 | 99.00th=[34866], 99.50th=[34866], 99.90th=[43779], 99.95th=[43779], 00:30:38.629 | 99.99th=[43779] 00:30:38.629 bw ( KiB/s): min= 1916, max= 2048, per=4.21%, avg=1999.74, stdev=62.72, samples=19 00:30:38.629 iops : min= 479, max= 512, avg=499.74, stdev=15.62, samples=19 00:30:38.629 lat (msec) : 20=0.32%, 50=99.68% 00:30:38.629 cpu : usr=98.69%, sys=0.71%, ctx=29, majf=0, minf=35 00:30:38.629 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename2: (groupid=0, jobs=1): err= 0: pid=3884804: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=466, BW=1867KiB/s (1911kB/s)(18.2MiB/10005msec) 00:30:38.629 slat (usec): min=5, max=102, avg=17.02, stdev=14.66 00:30:38.629 clat (usec): min=7487, max=63570, avg=34181.61, stdev=6680.58 00:30:38.629 lat (usec): min=7493, max=63585, avg=34198.63, stdev=6679.78 00:30:38.629 clat percentiles (usec): 00:30:38.629 | 1.00th=[17957], 5.00th=[23725], 10.00th=[29754], 20.00th=[30802], 00:30:38.629 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32375], 60.00th=[32900], 
00:30:38.629 | 70.00th=[34341], 80.00th=[40109], 90.00th=[43779], 95.00th=[45876], 00:30:38.629 | 99.00th=[52691], 99.50th=[54789], 99.90th=[63701], 99.95th=[63701], 00:30:38.629 | 99.99th=[63701] 00:30:38.629 bw ( KiB/s): min= 1664, max= 2011, per=3.90%, avg=1853.42, stdev=88.46, samples=19 00:30:38.629 iops : min= 416, max= 502, avg=463.32, stdev=22.04, samples=19 00:30:38.629 lat (msec) : 10=0.06%, 20=1.88%, 50=95.46%, 100=2.59% 00:30:38.629 cpu : usr=99.06%, sys=0.62%, ctx=25, majf=0, minf=33 00:30:38.629 IO depths : 1=1.1%, 2=2.4%, 4=11.2%, 8=71.8%, 16=13.5%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=91.1%, 8=5.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=4669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename2: (groupid=0, jobs=1): err= 0: pid=3884805: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=519, BW=2077KiB/s (2127kB/s)(20.3MiB/10024msec) 00:30:38.629 slat (usec): min=5, max=108, avg=17.40, stdev=15.47 00:30:38.629 clat (usec): min=10892, max=58451, avg=30642.86, stdev=5182.43 00:30:38.629 lat (usec): min=10902, max=58458, avg=30660.26, stdev=5184.32 00:30:38.629 clat percentiles (usec): 00:30:38.629 | 1.00th=[15926], 5.00th=[20055], 10.00th=[22676], 20.00th=[30016], 00:30:38.629 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31589], 60.00th=[31851], 00:30:38.629 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33424], 95.00th=[35914], 00:30:38.629 | 99.00th=[47973], 99.50th=[51119], 99.90th=[54789], 99.95th=[55313], 00:30:38.629 | 99.99th=[58459] 00:30:38.629 bw ( KiB/s): min= 1920, max= 2352, per=4.37%, avg=2077.40, stdev=99.00, samples=20 00:30:38.629 iops : min= 480, max= 588, avg=519.20, stdev=24.71, samples=20 00:30:38.629 lat (msec) : 20=4.94%, 50=94.33%, 100=0.73% 00:30:38.629 cpu : usr=98.86%, sys=0.69%, ctx=161, majf=0, minf=44 00:30:38.629 IO 
depths : 1=3.3%, 2=7.7%, 4=20.6%, 8=58.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename2: (groupid=0, jobs=1): err= 0: pid=3884806: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=491, BW=1966KiB/s (2014kB/s)(19.2MiB/10002msec) 00:30:38.629 slat (usec): min=5, max=114, avg=20.82, stdev=16.30 00:30:38.629 clat (usec): min=7653, max=53485, avg=32400.59, stdev=4267.00 00:30:38.629 lat (usec): min=7659, max=53491, avg=32421.41, stdev=4266.76 00:30:38.629 clat percentiles (usec): 00:30:38.629 | 1.00th=[19530], 5.00th=[28967], 10.00th=[30278], 20.00th=[31065], 00:30:38.629 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:38.629 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[42206], 00:30:38.629 | 99.00th=[48497], 99.50th=[50070], 99.90th=[53216], 99.95th=[53216], 00:30:38.629 | 99.99th=[53740] 00:30:38.629 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1955.37, stdev=73.78, samples=19 00:30:38.629 iops : min= 448, max= 512, avg=488.84, stdev=18.45, samples=19 00:30:38.629 lat (msec) : 10=0.12%, 20=1.08%, 50=98.27%, 100=0.53% 00:30:38.629 cpu : usr=98.99%, sys=0.64%, ctx=46, majf=0, minf=41 00:30:38.629 IO depths : 1=1.7%, 2=4.4%, 4=14.1%, 8=68.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=91.4%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=4917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename2: (groupid=0, jobs=1): err= 0: pid=3884807: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=487, 
BW=1950KiB/s (1997kB/s)(19.1MiB/10011msec) 00:30:38.629 slat (nsec): min=5558, max=92734, avg=16707.99, stdev=14465.89 00:30:38.629 clat (usec): min=15328, max=62667, avg=32718.09, stdev=4208.70 00:30:38.629 lat (usec): min=15374, max=62677, avg=32734.79, stdev=4208.09 00:30:38.629 clat percentiles (usec): 00:30:38.629 | 1.00th=[20841], 5.00th=[29754], 10.00th=[30540], 20.00th=[31065], 00:30:38.629 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:38.629 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35914], 95.00th=[42206], 00:30:38.629 | 99.00th=[49021], 99.50th=[50070], 99.90th=[60031], 99.95th=[61080], 00:30:38.629 | 99.99th=[62653] 00:30:38.629 bw ( KiB/s): min= 1664, max= 2048, per=4.10%, avg=1949.58, stdev=96.76, samples=19 00:30:38.629 iops : min= 416, max= 512, avg=487.32, stdev=24.11, samples=19 00:30:38.629 lat (msec) : 20=0.76%, 50=98.67%, 100=0.57% 00:30:38.629 cpu : usr=98.96%, sys=0.59%, ctx=146, majf=0, minf=36 00:30:38.629 IO depths : 1=0.3%, 2=0.9%, 4=5.5%, 8=76.4%, 16=16.9%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=91.6%, 8=5.5%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename2: (groupid=0, jobs=1): err= 0: pid=3884808: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=511, BW=2046KiB/s (2095kB/s)(20.0MiB/10018msec) 00:30:38.629 slat (usec): min=5, max=107, avg=17.77, stdev=14.73 00:30:38.629 clat (usec): min=14155, max=56054, avg=31140.61, stdev=3188.89 00:30:38.629 lat (usec): min=14163, max=56063, avg=31158.38, stdev=3190.17 00:30:38.629 clat percentiles (usec): 00:30:38.629 | 1.00th=[19792], 5.00th=[23200], 10.00th=[29492], 20.00th=[30540], 00:30:38.629 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:30:38.629 | 70.00th=[32113], 80.00th=[32637], 
90.00th=[33162], 95.00th=[33817], 00:30:38.629 | 99.00th=[39584], 99.50th=[42206], 99.90th=[52691], 99.95th=[52691], 00:30:38.629 | 99.99th=[55837] 00:30:38.629 bw ( KiB/s): min= 1920, max= 2352, per=4.30%, avg=2042.20, stdev=118.02, samples=20 00:30:38.629 iops : min= 480, max= 588, avg=510.40, stdev=29.51, samples=20 00:30:38.629 lat (msec) : 20=1.48%, 50=98.24%, 100=0.27% 00:30:38.629 cpu : usr=97.55%, sys=1.55%, ctx=39, majf=0, minf=37 00:30:38.629 IO depths : 1=5.2%, 2=10.5%, 4=21.8%, 8=54.8%, 16=7.7%, 32=0.0%, >=64=0.0% 00:30:38.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 complete : 0=0.0%, 4=93.4%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.629 issued rwts: total=5124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.629 filename2: (groupid=0, jobs=1): err= 0: pid=3884809: Wed Jul 24 20:10:25 2024 00:30:38.629 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10005msec) 00:30:38.630 slat (nsec): min=5388, max=97855, avg=15639.82, stdev=13966.86 00:30:38.630 clat (usec): min=5901, max=62587, avg=32632.99, stdev=4445.71 00:30:38.630 lat (usec): min=5907, max=62602, avg=32648.63, stdev=4445.73 00:30:38.630 clat percentiles (usec): 00:30:38.630 | 1.00th=[19530], 5.00th=[29754], 10.00th=[30540], 20.00th=[31065], 00:30:38.630 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:38.630 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35914], 95.00th=[41157], 00:30:38.630 | 99.00th=[50594], 99.50th=[53216], 99.90th=[62653], 99.95th=[62653], 00:30:38.630 | 99.99th=[62653] 00:30:38.630 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1944.95, stdev=80.22, samples=19 00:30:38.630 iops : min= 448, max= 512, avg=486.16, stdev=20.08, samples=19 00:30:38.630 lat (msec) : 10=0.12%, 20=1.00%, 50=97.81%, 100=1.06% 00:30:38.630 cpu : usr=97.68%, sys=1.44%, ctx=72, majf=0, minf=45 00:30:38.630 IO depths : 1=1.3%, 2=2.7%, 4=8.0%, 8=73.9%, 
16=14.1%, 32=0.0%, >=64=0.0% 00:30:38.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.630 complete : 0=0.0%, 4=90.6%, 8=6.3%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.630 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.630 filename2: (groupid=0, jobs=1): err= 0: pid=3884811: Wed Jul 24 20:10:25 2024 00:30:38.630 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10004msec) 00:30:38.630 slat (usec): min=5, max=113, avg=16.87, stdev=14.22 00:30:38.630 clat (usec): min=5651, max=62504, avg=32372.12, stdev=6601.01 00:30:38.630 lat (usec): min=5657, max=62518, avg=32388.99, stdev=6601.75 00:30:38.630 clat percentiles (usec): 00:30:38.630 | 1.00th=[16057], 5.00th=[21103], 10.00th=[23725], 20.00th=[30016], 00:30:38.630 | 30.00th=[30802], 40.00th=[31589], 50.00th=[31851], 60.00th=[32375], 00:30:38.630 | 70.00th=[32900], 80.00th=[34866], 90.00th=[42206], 95.00th=[44303], 00:30:38.630 | 99.00th=[50594], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:30:38.630 | 99.99th=[62653] 00:30:38.630 bw ( KiB/s): min= 1763, max= 2208, per=4.11%, avg=1953.42, stdev=122.17, samples=19 00:30:38.630 iops : min= 440, max= 552, avg=488.32, stdev=30.61, samples=19 00:30:38.630 lat (msec) : 10=0.24%, 20=3.53%, 50=95.13%, 100=1.10% 00:30:38.630 cpu : usr=97.95%, sys=1.10%, ctx=691, majf=0, minf=36 00:30:38.630 IO depths : 1=1.6%, 2=5.4%, 4=16.9%, 8=64.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:38.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.630 complete : 0=0.0%, 4=92.2%, 8=3.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:38.630 issued rwts: total=4926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:38.630 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:38.630 00:30:38.630 Run status group 0 (all jobs): 00:30:38.630 READ: bw=46.4MiB/s (48.7MB/s), 1841KiB/s-2177KiB/s (1885kB/s-2229kB/s), io=465MiB (488MB), 
run=10001-10030msec 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 bdev_null0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 [2024-07-24 20:10:25.256030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 bdev_null1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:38.630 20:10:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:38.630 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.631 { 00:30:38.631 "params": { 00:30:38.631 "name": "Nvme$subsystem", 00:30:38.631 "trtype": "$TEST_TRANSPORT", 00:30:38.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.631 
"adrfam": "ipv4", 00:30:38.631 "trsvcid": "$NVMF_PORT", 00:30:38.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.631 "hdgst": ${hdgst:-false}, 00:30:38.631 "ddgst": ${ddgst:-false} 00:30:38.631 }, 00:30:38.631 "method": "bdev_nvme_attach_controller" 00:30:38.631 } 00:30:38.631 EOF 00:30:38.631 )") 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:38.631 { 00:30:38.631 "params": { 00:30:38.631 "name": "Nvme$subsystem", 00:30:38.631 "trtype": "$TEST_TRANSPORT", 00:30:38.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.631 "adrfam": "ipv4", 00:30:38.631 "trsvcid": "$NVMF_PORT", 00:30:38.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.631 "hdgst": ${hdgst:-false}, 00:30:38.631 "ddgst": ${ddgst:-false} 00:30:38.631 }, 00:30:38.631 "method": "bdev_nvme_attach_controller" 00:30:38.631 } 00:30:38.631 EOF 00:30:38.631 )") 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:38.631 "params": { 00:30:38.631 "name": "Nvme0", 00:30:38.631 "trtype": "tcp", 00:30:38.631 "traddr": "10.0.0.2", 00:30:38.631 "adrfam": "ipv4", 00:30:38.631 "trsvcid": "4420", 00:30:38.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.631 "hdgst": false, 00:30:38.631 "ddgst": false 00:30:38.631 }, 00:30:38.631 "method": "bdev_nvme_attach_controller" 00:30:38.631 },{ 00:30:38.631 "params": { 00:30:38.631 "name": "Nvme1", 00:30:38.631 "trtype": "tcp", 00:30:38.631 "traddr": "10.0.0.2", 00:30:38.631 "adrfam": "ipv4", 00:30:38.631 "trsvcid": "4420", 00:30:38.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:38.631 "hdgst": false, 00:30:38.631 "ddgst": false 00:30:38.631 }, 00:30:38.631 "method": "bdev_nvme_attach_controller" 00:30:38.631 }' 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:38.631 20:10:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:38.631 20:10:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.631 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:38.631 ... 00:30:38.631 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:38.631 ... 00:30:38.631 fio-3.35 00:30:38.631 Starting 4 threads 00:30:38.631 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.918 00:30:43.918 filename0: (groupid=0, jobs=1): err= 0: pid=3887084: Wed Jul 24 20:10:31 2024 00:30:43.918 read: IOPS=2045, BW=16.0MiB/s (16.8MB/s)(80.0MiB/5003msec) 00:30:43.918 slat (nsec): min=5362, max=26036, avg=6101.97, stdev=1934.34 00:30:43.918 clat (usec): min=1599, max=6324, avg=3893.61, stdev=664.00 00:30:43.918 lat (usec): min=1605, max=6330, avg=3899.71, stdev=663.93 00:30:43.918 clat percentiles (usec): 00:30:43.918 | 1.00th=[ 2442], 5.00th=[ 2802], 10.00th=[ 3032], 20.00th=[ 3294], 00:30:43.918 | 30.00th=[ 3523], 40.00th=[ 3720], 50.00th=[ 3916], 60.00th=[ 4080], 00:30:43.918 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5014], 00:30:43.918 | 99.00th=[ 5473], 99.50th=[ 5735], 99.90th=[ 6194], 99.95th=[ 6259], 00:30:43.918 | 99.99th=[ 6325] 00:30:43.918 bw ( KiB/s): min=15824, max=16768, per=25.53%, avg=16369.60, stdev=273.35, samples=10 00:30:43.918 iops : min= 1978, max= 2096, avg=2046.20, stdev=34.17, samples=10 00:30:43.918 lat (msec) : 2=0.11%, 4=55.76%, 10=44.13% 00:30:43.918 cpu : usr=96.90%, sys=2.84%, ctx=9, majf=0, minf=39 00:30:43.918 IO depths : 1=0.1%, 2=2.0%, 4=66.9%, 8=30.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:43.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:43.918 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.918 issued rwts: total=10236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.918 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:43.918 filename0: (groupid=0, jobs=1): err= 0: pid=3887085: Wed Jul 24 20:10:31 2024 00:30:43.918 read: IOPS=2019, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5001msec) 00:30:43.918 slat (nsec): min=5355, max=39436, avg=6210.13, stdev=2489.35 00:30:43.918 clat (usec): min=1571, max=6598, avg=3945.18, stdev=680.77 00:30:43.918 lat (usec): min=1577, max=6604, avg=3951.39, stdev=680.67 00:30:43.918 clat percentiles (usec): 00:30:43.918 | 1.00th=[ 2474], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3359], 00:30:43.918 | 30.00th=[ 3589], 40.00th=[ 3785], 50.00th=[ 3916], 60.00th=[ 4080], 00:30:43.918 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5145], 00:30:43.918 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6128], 99.95th=[ 6259], 00:30:43.918 | 99.99th=[ 6587] 00:30:43.918 bw ( KiB/s): min=15856, max=16368, per=25.18%, avg=16145.78, stdev=169.22, samples=9 00:30:43.918 iops : min= 1982, max= 2046, avg=2018.22, stdev=21.15, samples=9 00:30:43.918 lat (msec) : 2=0.16%, 4=54.29%, 10=45.55% 00:30:43.918 cpu : usr=97.38%, sys=2.34%, ctx=3, majf=0, minf=125 00:30:43.918 IO depths : 1=0.2%, 2=1.9%, 4=66.6%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:43.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.918 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.918 issued rwts: total=10099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.918 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:43.918 filename1: (groupid=0, jobs=1): err= 0: pid=3887086: Wed Jul 24 20:10:31 2024 00:30:43.918 read: IOPS=1965, BW=15.4MiB/s (16.1MB/s)(76.8MiB/5002msec) 00:30:43.918 slat (nsec): min=5355, max=40444, avg=6135.31, stdev=2357.40 00:30:43.918 clat (usec): min=1780, max=45832, 
avg=4053.19, stdev=1378.46 00:30:43.918 lat (usec): min=1786, max=45860, avg=4059.32, stdev=1378.62 00:30:43.918 clat percentiles (usec): 00:30:43.918 | 1.00th=[ 2540], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3425], 00:30:43.918 | 30.00th=[ 3654], 40.00th=[ 3851], 50.00th=[ 4015], 60.00th=[ 4178], 00:30:43.918 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5211], 00:30:43.918 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[ 6521], 99.95th=[45876], 00:30:43.918 | 99.99th=[45876] 00:30:43.918 bw ( KiB/s): min=14188, max=16176, per=24.53%, avg=15727.60, stdev=597.05, samples=10 00:30:43.918 iops : min= 1773, max= 2022, avg=1965.90, stdev=74.77, samples=10 00:30:43.918 lat (msec) : 2=0.08%, 4=49.04%, 10=50.80%, 50=0.08% 00:30:43.918 cpu : usr=97.22%, sys=2.54%, ctx=8, majf=0, minf=111 00:30:43.918 IO depths : 1=0.2%, 2=1.4%, 4=67.7%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:43.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.918 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.918 issued rwts: total=9833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.918 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:43.918 filename1: (groupid=0, jobs=1): err= 0: pid=3887087: Wed Jul 24 20:10:31 2024 00:30:43.918 read: IOPS=1985, BW=15.5MiB/s (16.3MB/s)(77.6MiB/5002msec) 00:30:43.918 slat (nsec): min=7808, max=28924, avg=8515.99, stdev=1675.87 00:30:43.918 clat (usec): min=1617, max=47231, avg=4007.53, stdev=1384.46 00:30:43.918 lat (usec): min=1628, max=47260, avg=4016.04, stdev=1384.57 00:30:43.918 clat percentiles (usec): 00:30:43.918 | 1.00th=[ 2606], 5.00th=[ 2933], 10.00th=[ 3163], 20.00th=[ 3425], 00:30:43.918 | 30.00th=[ 3621], 40.00th=[ 3818], 50.00th=[ 3949], 60.00th=[ 4146], 00:30:43.918 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5080], 00:30:43.918 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6456], 99.95th=[46924], 00:30:43.918 | 99.99th=[47449] 
00:30:43.918 bw ( KiB/s): min=14669, max=16176, per=24.72%, avg=15850.33, stdev=452.96, samples=9 00:30:43.918 iops : min= 1833, max= 2022, avg=1981.22, stdev=56.82, samples=9 00:30:43.918 lat (msec) : 2=0.04%, 4=52.38%, 10=47.50%, 50=0.08% 00:30:43.918 cpu : usr=97.16%, sys=2.54%, ctx=7, majf=0, minf=67 00:30:43.918 IO depths : 1=0.2%, 2=1.4%, 4=66.9%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:43.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.918 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.918 issued rwts: total=9930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.918 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:43.918 00:30:43.918 Run status group 0 (all jobs): 00:30:43.918 READ: bw=62.6MiB/s (65.7MB/s), 15.4MiB/s-16.0MiB/s (16.1MB/s-16.8MB/s), io=313MiB (328MB), run=5001-5003msec 00:30:43.918 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:43.918 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:43.918 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.918 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:43.918 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:43.918 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 00:30:43.919 real 0m24.465s 00:30:43.919 user 5m23.699s 00:30:43.919 sys 0m3.937s 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 ************************************ 00:30:43.919 END TEST fio_dif_rand_params 00:30:43.919 ************************************ 00:30:43.919 20:10:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:43.919 20:10:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:43.919 20:10:31 nvmf_dif -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 ************************************ 00:30:43.919 START TEST fio_dif_digest 00:30:43.919 ************************************ 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:30:43.919 bdev_null0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 [2024-07-24 20:10:31.700722] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@532 -- # local subsystem config 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.919 { 00:30:43.919 "params": { 00:30:43.919 "name": "Nvme$subsystem", 00:30:43.919 "trtype": "$TEST_TRANSPORT", 00:30:43.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.919 "adrfam": "ipv4", 00:30:43.919 "trsvcid": "$NVMF_PORT", 00:30:43.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.919 "hdgst": ${hdgst:-false}, 00:30:43.919 "ddgst": ${ddgst:-false} 00:30:43.919 }, 00:30:43.919 "method": "bdev_nvme_attach_controller" 00:30:43.919 } 00:30:43.919 EOF 00:30:43.919 )") 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 
-- # shift 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.919 "params": { 00:30:43.919 "name": "Nvme0", 00:30:43.919 "trtype": "tcp", 00:30:43.919 "traddr": "10.0.0.2", 00:30:43.919 "adrfam": "ipv4", 00:30:43.919 "trsvcid": "4420", 00:30:43.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.919 "hdgst": true, 00:30:43.919 "ddgst": true 00:30:43.919 }, 00:30:43.919 "method": "bdev_nvme_attach_controller" 00:30:43.919 }' 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.919 20:10:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.179 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:44.179 ... 00:30:44.179 fio-3.35 00:30:44.179 Starting 3 threads 00:30:44.438 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.693 00:30:56.693 filename0: (groupid=0, jobs=1): err= 0: pid=3888496: Wed Jul 24 20:10:42 2024 00:30:56.693 read: IOPS=122, BW=15.3MiB/s (16.0MB/s)(153MiB/10027msec) 00:30:56.693 slat (nsec): min=5750, max=54181, avg=9046.55, stdev=2262.64 00:30:56.693 clat (usec): min=7194, max=97591, avg=24560.26, stdev=20271.15 00:30:56.693 lat (usec): min=7203, max=97600, avg=24569.31, stdev=20271.13 00:30:56.693 clat percentiles (usec): 00:30:56.693 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[12125], 00:30:56.693 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14353], 60.00th=[15008], 00:30:56.693 | 70.00th=[16057], 80.00th=[53216], 90.00th=[55313], 95.00th=[56361], 00:30:56.693 | 99.00th=[94897], 99.50th=[95945], 99.90th=[96994], 99.95th=[98042], 00:30:56.693 | 99.99th=[98042] 00:30:56.694 bw ( KiB/s): min= 8704, max=25088, per=28.20%, avg=15628.80, stdev=3885.52, samples=20 00:30:56.694 iops : min= 68, max= 196, avg=122.10, stdev=30.36, samples=20 00:30:56.694 lat (msec) : 
10=5.80%, 20=68.79%, 50=0.08%, 100=25.33% 00:30:56.694 cpu : usr=96.42%, sys=3.32%, ctx=16, majf=0, minf=148 00:30:56.694 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.694 issued rwts: total=1224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:56.694 filename0: (groupid=0, jobs=1): err= 0: pid=3888497: Wed Jul 24 20:10:42 2024 00:30:56.694 read: IOPS=176, BW=22.0MiB/s (23.1MB/s)(221MiB/10024msec) 00:30:56.694 slat (nsec): min=5635, max=36905, avg=6684.48, stdev=1255.61 00:30:56.694 clat (usec): min=5752, max=99277, avg=16998.02, stdev=15544.12 00:30:56.694 lat (usec): min=5758, max=99285, avg=17004.71, stdev=15544.15 00:30:56.694 clat percentiles (usec): 00:30:56.694 | 1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 9241], 00:30:56.694 | 30.00th=[10290], 40.00th=[11338], 50.00th=[12256], 60.00th=[13435], 00:30:56.694 | 70.00th=[14484], 80.00th=[15664], 90.00th=[51119], 95.00th=[54789], 00:30:56.694 | 99.00th=[93848], 99.50th=[94897], 99.90th=[95945], 99.95th=[99091], 00:30:56.694 | 99.99th=[99091] 00:30:56.694 bw ( KiB/s): min=16128, max=33024, per=40.77%, avg=22592.00, stdev=4003.05, samples=20 00:30:56.694 iops : min= 126, max= 258, avg=176.50, stdev=31.27, samples=20 00:30:56.694 lat (msec) : 10=26.92%, 20=61.65%, 50=0.62%, 100=10.80% 00:30:56.694 cpu : usr=96.29%, sys=3.26%, ctx=377, majf=0, minf=158 00:30:56.694 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.694 issued rwts: total=1768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.694 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:30:56.694 filename0: (groupid=0, jobs=1): err= 0: pid=3888499: Wed Jul 24 20:10:42 2024 00:30:56.694 read: IOPS=134, BW=16.9MiB/s (17.7MB/s)(169MiB/10040msec) 00:30:56.694 slat (nsec): min=5965, max=61415, avg=7366.56, stdev=2438.97 00:30:56.694 clat (usec): min=7657, max=99526, avg=22215.88, stdev=16750.56 00:30:56.694 lat (usec): min=7664, max=99531, avg=22223.24, stdev=16750.43 00:30:56.694 clat percentiles (usec): 00:30:56.694 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12125], 00:30:56.694 | 30.00th=[13304], 40.00th=[14353], 50.00th=[15270], 60.00th=[16188], 00:30:56.694 | 70.00th=[17433], 80.00th=[20579], 90.00th=[54264], 95.00th=[56361], 00:30:56.694 | 99.00th=[59507], 99.50th=[60031], 99.90th=[98042], 99.95th=[99091], 00:30:56.694 | 99.99th=[99091] 00:30:56.694 bw ( KiB/s): min=11520, max=22528, per=31.23%, avg=17305.60, stdev=2985.91, samples=20 00:30:56.694 iops : min= 90, max= 176, avg=135.20, stdev=23.33, samples=20 00:30:56.694 lat (msec) : 10=5.54%, 20=74.02%, 50=1.40%, 100=19.04% 00:30:56.694 cpu : usr=96.98%, sys=2.78%, ctx=22, majf=0, minf=247 00:30:56.694 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.694 issued rwts: total=1355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:56.694 00:30:56.694 Run status group 0 (all jobs): 00:30:56.694 READ: bw=54.1MiB/s (56.7MB/s), 15.3MiB/s-22.0MiB/s (16.0MB/s-23.1MB/s), io=543MiB (570MB), run=10024-10040msec 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:56.694 20:10:42 
nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.694 00:30:56.694 real 0m11.140s 00:30:56.694 user 0m41.386s 00:30:56.694 sys 0m1.277s 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:56.694 20:10:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:56.694 ************************************ 00:30:56.694 END TEST fio_dif_digest 00:30:56.694 ************************************ 00:30:56.694 20:10:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:56.694 20:10:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:56.694 rmmod nvme_tcp 
00:30:56.694 rmmod nvme_fabrics 00:30:56.694 rmmod nvme_keyring 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3878132 ']' 00:30:56.694 20:10:42 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3878132 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3878132 ']' 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3878132 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3878132 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3878132' 00:30:56.694 killing process with pid 3878132 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3878132 00:30:56.694 20:10:42 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3878132 00:30:56.694 20:10:43 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:56.694 20:10:43 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:58.607 Waiting for block devices as requested 00:30:58.607 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:58.607 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:58.868 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:58.868 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:58.868 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:59.127 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:30:59.127 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:59.127 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:59.387 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:59.387 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:59.647 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:59.647 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:59.647 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:59.647 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:59.908 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:59.908 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:59.908 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:00.169 20:10:48 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.169 20:10:48 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.169 20:10:48 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.169 20:10:48 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.169 20:10:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.169 20:10:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:00.169 20:10:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.712 20:10:50 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:02.712 00:31:02.712 real 1m17.148s 00:31:02.712 user 8m9.192s 00:31:02.712 sys 0m18.947s 00:31:02.712 20:10:50 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.712 20:10:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:02.712 ************************************ 00:31:02.712 END TEST nvmf_dif 00:31:02.712 ************************************ 00:31:02.712 20:10:50 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:02.712 20:10:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:02.712 20:10:50 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.712 20:10:50 -- common/autotest_common.sh@10 -- # set +x 00:31:02.712 ************************************ 00:31:02.712 START TEST nvmf_abort_qd_sizes 00:31:02.712 ************************************ 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:02.712 * Looking for test storage... 00:31:02.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:02.712 
20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.712 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:02.713 20:10:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.307 20:10:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:09.307 
20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:09.307 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:09.307 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.307 
20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:09.307 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:09.307 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.307 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.308 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:09.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:31:09.570 00:31:09.570 --- 10.0.0.2 ping statistics --- 00:31:09.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.570 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:09.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:31:09.570 00:31:09.570 --- 10.0.0.1 ping statistics --- 00:31:09.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.570 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:09.570 20:10:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:12.878 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:12.878 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:12.878 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:12.878 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:12.878 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:00:01.7 (8086 0b00): ioatdma -> 
vfio-pci 00:31:13.138 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:13.138 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:13.398 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.398 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.398 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.398 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.398 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.398 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3897924 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3897924 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3897924 ']' 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:13.661 20:11:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:13.661 [2024-07-24 20:11:01.448494] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:31:13.661 [2024-07-24 20:11:01.448557] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.661 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.661 [2024-07-24 20:11:01.522236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:13.661 [2024-07-24 20:11:01.599909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.661 [2024-07-24 20:11:01.599951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.661 [2024-07-24 20:11:01.599959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.661 [2024-07-24 20:11:01.599966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.661 [2024-07-24 20:11:01.599971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:13.661 [2024-07-24 20:11:01.600139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.661 [2024-07-24 20:11:01.600257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.661 [2024-07-24 20:11:01.600422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.662 [2024-07-24 20:11:01.600423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:14.602 20:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:14.602 ************************************ 00:31:14.602 START TEST spdk_target_abort 00:31:14.602 ************************************ 00:31:14.602 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:14.602 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:14.602 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:14.602 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.602 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.863 spdk_targetn1 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.863 [2024-07-24 20:11:02.629172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.863 [2024-07-24 20:11:02.669423] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:14.863 20:11:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:14.863 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.124 [2024-07-24 20:11:02.842406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:472 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:15.124 [2024-07-24 20:11:02.842434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003c p:1 m:0 dnr:0 00:31:15.124 [2024-07-24 20:11:02.905633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2176 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:31:15.124 [2024-07-24 20:11:02.905652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:18.422 Initializing NVMe Controllers 00:31:18.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:31:18.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:18.422 Initialization complete. Launching workers. 00:31:18.422 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9130, failed: 2 00:31:18.422 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2855, failed to submit 6277 00:31:18.422 success 761, unsuccess 2094, failed 0 00:31:18.422 20:11:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:18.422 20:11:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:18.422 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.422 [2024-07-24 20:11:05.975400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:480 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:31:18.422 [2024-07-24 20:11:05.975432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:31:18.422 [2024-07-24 20:11:06.072316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:2744 len:8 PRP1 0x200007c42000 PRP2 0x0 00:31:18.422 [2024-07-24 20:11:06.072342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:18.683 [2024-07-24 20:11:06.552259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:14176 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:31:18.683 [2024-07-24 20:11:06.552290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:00f1 p:1 m:0 dnr:0 00:31:19.274 [2024-07-24 20:11:07.070436] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:26528 len:8 PRP1 0x200007c46000 PRP2 0x0 00:31:19.274 [2024-07-24 20:11:07.070465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:31:20.240 [2024-07-24 20:11:07.895281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:46088 len:8 PRP1 0x200007c60000 PRP2 0x0 00:31:20.240 [2024-07-24 20:11:07.895310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0084 p:1 m:0 dnr:0 00:31:21.183 Initializing NVMe Controllers 00:31:21.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:21.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:21.183 Initialization complete. Launching workers. 00:31:21.183 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8972, failed: 5 00:31:21.183 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7758 00:31:21.183 success 372, unsuccess 847, failed 0 00:31:21.183 20:11:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:21.183 20:11:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:21.444 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.357 [2024-07-24 20:11:10.868498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:187 nsid:1 lba:175776 len:8 PRP1 0x200007912000 PRP2 0x0 00:31:23.357 [2024-07-24 20:11:10.868534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 
cid:187 cdw0:0 sqhd:0087 p:0 m:0 dnr:0 00:31:24.740 Initializing NVMe Controllers 00:31:24.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:24.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:24.740 Initialization complete. Launching workers. 00:31:24.740 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41003, failed: 1 00:31:24.740 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2729, failed to submit 38275 00:31:24.740 success 663, unsuccess 2066, failed 0 00:31:24.740 20:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:24.740 20:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.740 20:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:24.740 20:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.740 20:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:24.740 20:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.740 20:11:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3897924 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3897924 ']' 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3897924 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@955 -- # uname 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3897924 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3897924' 00:31:26.652 killing process with pid 3897924 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3897924 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3897924 00:31:26.652 00:31:26.652 real 0m12.011s 00:31:26.652 user 0m48.427s 00:31:26.652 sys 0m2.081s 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.652 ************************************ 00:31:26.652 END TEST spdk_target_abort 00:31:26.652 ************************************ 00:31:26.652 20:11:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:26.652 20:11:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:26.652 20:11:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:26.652 20:11:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:26.652 ************************************ 00:31:26.652 START TEST kernel_target_abort 00:31:26.652 ************************************ 00:31:26.652 20:11:14 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:26.652 20:11:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:29.953 Waiting for block devices as requested 00:31:29.953 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:29.953 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:29.953 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:29.953 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:29.953 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:29.953 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:30.214 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:30.214 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:30.214 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:30.474 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:30.474 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:30.735 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:30.735 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:30.735 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:30.735 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:30.995 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:30.995 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:31.256 No valid GPT data, bailing 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:31.256 20:11:19 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:31.256 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:31.517 00:31:31.517 Discovery Log Number of Records 2, Generation counter 2 00:31:31.517 =====Discovery Log Entry 0====== 00:31:31.517 trtype: tcp 00:31:31.517 adrfam: ipv4 00:31:31.517 subtype: current discovery subsystem 00:31:31.517 treq: not specified, sq flow control disable supported 00:31:31.517 portid: 1 00:31:31.517 trsvcid: 4420 00:31:31.517 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:31.517 traddr: 10.0.0.1 00:31:31.517 eflags: none 00:31:31.517 sectype: none 00:31:31.517 =====Discovery Log Entry 1====== 00:31:31.517 trtype: tcp 00:31:31.517 adrfam: ipv4 00:31:31.517 subtype: 
nvme subsystem 00:31:31.517 treq: not specified, sq flow control disable supported 00:31:31.517 portid: 1 00:31:31.517 trsvcid: 4420 00:31:31.517 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:31.517 traddr: 10.0.0.1 00:31:31.517 eflags: none 00:31:31.517 sectype: none 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r 
in trtype adrfam traddr trsvcid subnqn 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:31.517 20:11:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:31.517 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.824 Initializing NVMe Controllers 00:31:34.824 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:34.824 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:34.824 Initialization complete. Launching workers. 
00:31:34.824 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42283, failed: 0 00:31:34.824 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 42283, failed to submit 0 00:31:34.824 success 0, unsuccess 42283, failed 0 00:31:34.824 20:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:34.824 20:11:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:34.824 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.127 Initializing NVMe Controllers 00:31:38.127 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:38.127 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:38.127 Initialization complete. Launching workers. 
00:31:38.127 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82348, failed: 0 00:31:38.127 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20750, failed to submit 61598 00:31:38.127 success 0, unsuccess 20750, failed 0 00:31:38.127 20:11:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:38.127 20:11:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.127 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.673 Initializing NVMe Controllers 00:31:40.673 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.673 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.673 Initialization complete. Launching workers. 
00:31:40.673 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79850, failed: 0 00:31:40.673 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19922, failed to submit 59928 00:31:40.673 success 0, unsuccess 19922, failed 0 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:40.673 20:11:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:43.978 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:43.978 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:45.910 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:46.172 00:31:46.172 real 0m19.473s 00:31:46.172 user 0m7.298s 00:31:46.172 sys 0m6.197s 00:31:46.172 20:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:46.172 20:11:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:46.172 ************************************ 00:31:46.172 END TEST kernel_target_abort 00:31:46.172 ************************************ 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:46.172 rmmod nvme_tcp 00:31:46.172 rmmod nvme_fabrics 00:31:46.172 rmmod nvme_keyring 00:31:46.172 20:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3897924 ']' 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3897924 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3897924 ']' 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3897924 00:31:46.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3897924) - No such process 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3897924 is not found' 00:31:46.172 Process with pid 3897924 is not found 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:46.172 20:11:34 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:49.475 Waiting for block devices as requested 00:31:49.475 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:49.475 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:49.736 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:49.736 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:49.736 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:49.736 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:49.997 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:49.997 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:49.997 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:50.258 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:50.258 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:50.519 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:50.519 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:50.519 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:50.780 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:50.780 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:50.780 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:51.041 20:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:51.041 20:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:51.041 20:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:51.041 20:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:51.041 20:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.041 20:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:51.041 20:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.587 20:11:40 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:53.587 00:31:53.587 real 0m50.778s 00:31:53.587 user 1m1.002s 00:31:53.587 sys 0m18.878s 00:31:53.587 20:11:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:53.587 20:11:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:53.587 ************************************ 00:31:53.587 END TEST nvmf_abort_qd_sizes 00:31:53.587 ************************************ 00:31:53.587 20:11:41 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:53.587 20:11:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:53.587 20:11:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:53.587 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:31:53.588 ************************************ 00:31:53.588 START TEST keyring_file 00:31:53.588 ************************************ 00:31:53.588 20:11:41 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:53.588 * Looking for test storage... 00:31:53.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.588 20:11:41 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.588 20:11:41 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.588 20:11:41 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.588 20:11:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.588 20:11:41 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.588 20:11:41 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.588 20:11:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:53.588 20:11:41 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:53.588 20:11:41 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rbWQ1YZHfr 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rbWQ1YZHfr 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rbWQ1YZHfr 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rbWQ1YZHfr 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rnRkyDDOAd 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:53.588 20:11:41 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:53.588 20:11:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rnRkyDDOAd 00:31:53.588 20:11:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rnRkyDDOAd 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rnRkyDDOAd 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=3908096 00:31:53.588 20:11:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3908096 00:31:53.588 20:11:41 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3908096 ']' 00:31:53.588 20:11:41 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.588 20:11:41 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:53.588 20:11:41 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.588 20:11:41 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:53.588 20:11:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:53.588 [2024-07-24 20:11:41.324273] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:31:53.588 [2024-07-24 20:11:41.324329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908096 ] 00:31:53.588 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.588 [2024-07-24 20:11:41.378547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.588 [2024-07-24 20:11:41.445774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.160 20:11:42 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:54.160 20:11:42 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:54.160 20:11:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:54.160 20:11:42 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.160 20:11:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:54.421 [2024-07-24 20:11:42.114174] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.421 null0 00:31:54.421 [2024-07-24 20:11:42.146228] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:54.421 [2024-07-24 20:11:42.146463] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:54.421 [2024-07-24 20:11:42.154233] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.421 20:11:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:54.421 [2024-07-24 20:11:42.170275] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:54.421 request: 00:31:54.421 { 00:31:54.421 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.421 "secure_channel": false, 00:31:54.421 "listen_address": { 00:31:54.421 "trtype": "tcp", 00:31:54.421 "traddr": "127.0.0.1", 00:31:54.421 "trsvcid": "4420" 00:31:54.421 }, 00:31:54.421 "method": "nvmf_subsystem_add_listener", 00:31:54.421 "req_id": 1 00:31:54.421 } 00:31:54.421 Got JSON-RPC error response 00:31:54.421 response: 00:31:54.421 { 00:31:54.421 "code": -32602, 00:31:54.421 "message": "Invalid parameters" 00:31:54.421 } 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:54.421 20:11:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=3908239 00:31:54.421 20:11:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3908239 
/var/tmp/bperf.sock 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3908239 ']' 00:31:54.421 20:11:42 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:54.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:54.421 20:11:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:54.421 [2024-07-24 20:11:42.231341] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:31:54.421 [2024-07-24 20:11:42.231389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908239 ] 00:31:54.421 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.421 [2024-07-24 20:11:42.307321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.421 [2024-07-24 20:11:42.371219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.364 20:11:42 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.364 20:11:42 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:55.364 20:11:42 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:55.364 20:11:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:55.364 20:11:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rnRkyDDOAd 00:31:55.364 20:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rnRkyDDOAd 00:31:55.364 20:11:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:55.364 20:11:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:55.364 20:11:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:55.364 20:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:55.364 20:11:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:55.625 20:11:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.rbWQ1YZHfr == 
\/\t\m\p\/\t\m\p\.\r\b\W\Q\1\Y\Z\H\f\r ]] 00:31:55.625 20:11:43 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:55.625 20:11:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:55.625 20:11:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:55.625 20:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:55.625 20:11:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:55.886 20:11:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rnRkyDDOAd == \/\t\m\p\/\t\m\p\.\r\n\R\k\y\D\D\O\A\d ]] 00:31:55.886 20:11:43 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:55.886 20:11:43 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:55.886 20:11:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:55.886 20:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.147 20:11:43 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:31:56.147 20:11:43 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:56.147 20:11:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:56.147 [2024-07-24 20:11:44.055468] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:56.407 nvme0n1 00:31:56.407 20:11:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.408 20:11:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:56.408 20:11:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:56.408 20:11:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.668 20:11:44 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:56.668 20:11:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:56.668 Running I/O for 1 seconds... 00:31:58.053 00:31:58.053 Latency(us) 00:31:58.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.053 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:58.053 nvme0n1 : 1.02 6653.66 25.99 0.00 0.00 19040.61 9721.17 29928.11 00:31:58.053 =================================================================================================================== 00:31:58.053 Total : 6653.66 25.99 0.00 0.00 19040.61 9721.17 29928.11 00:31:58.053 0 00:31:58.053 20:11:45 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:58.053 20:11:45 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.053 20:11:45 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:58.053 20:11:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.053 20:11:45 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:58.053 20:11:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.315 20:11:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:58.315 20:11:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:58.315 20:11:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:58.315 [2024-07-24 20:11:46.220151] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:58.315 [2024-07-24 20:11:46.220418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaa170 (107): Transport endpoint is not connected 00:31:58.315 [2024-07-24 20:11:46.221414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaa170 (9): Bad file descriptor 00:31:58.315 [2024-07-24 20:11:46.222415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:58.315 [2024-07-24 20:11:46.222423] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:58.315 [2024-07-24 20:11:46.222428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:58.315 request: 00:31:58.315 { 00:31:58.315 "name": "nvme0", 00:31:58.315 "trtype": "tcp", 00:31:58.315 "traddr": "127.0.0.1", 00:31:58.315 "adrfam": "ipv4", 00:31:58.315 "trsvcid": "4420", 00:31:58.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.315 "prchk_reftag": false, 00:31:58.315 "prchk_guard": false, 00:31:58.315 "hdgst": false, 00:31:58.315 "ddgst": false, 00:31:58.315 "psk": "key1", 00:31:58.315 "method": "bdev_nvme_attach_controller", 00:31:58.315 "req_id": 1 00:31:58.315 } 00:31:58.315 Got JSON-RPC error response 00:31:58.315 response: 00:31:58.315 { 00:31:58.315 "code": -5, 00:31:58.315 "message": "Input/output error" 00:31:58.315 } 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:58.315 20:11:46 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:58.315 20:11:46 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:58.315 
20:11:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.315 20:11:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:58.315 20:11:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.315 20:11:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:58.315 20:11:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.576 20:11:46 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:58.576 20:11:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:58.576 20:11:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:58.576 20:11:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.576 20:11:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.576 20:11:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:58.576 20:11:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.837 20:11:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:58.837 20:11:46 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:58.837 20:11:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:58.837 20:11:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:58.837 20:11:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:59.097 20:11:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:59.098 20:11:46 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.098 20:11:46 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:59.098 20:11:46 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:59.098 20:11:46 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.rbWQ1YZHfr 00:31:59.098 20:11:46 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:59.098 20:11:46 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:59.098 20:11:46 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:59.098 20:11:46 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:59.098 20:11:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.098 20:11:46 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:59.098 20:11:46 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.098 20:11:46 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:59.098 20:11:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:59.358 [2024-07-24 20:11:47.138044] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rbWQ1YZHfr': 0100660 00:31:59.358 [2024-07-24 20:11:47.138061] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:59.358 request: 00:31:59.358 { 00:31:59.358 "name": "key0", 00:31:59.358 "path": "/tmp/tmp.rbWQ1YZHfr", 00:31:59.358 "method": "keyring_file_add_key", 00:31:59.358 "req_id": 1 00:31:59.358 } 00:31:59.358 Got JSON-RPC error response 00:31:59.358 response: 00:31:59.358 { 00:31:59.358 "code": -1, 
00:31:59.358 "message": "Operation not permitted" 00:31:59.358 } 00:31:59.358 20:11:47 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:59.358 20:11:47 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:59.358 20:11:47 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:59.358 20:11:47 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:59.358 20:11:47 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.rbWQ1YZHfr 00:31:59.358 20:11:47 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:59.358 20:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rbWQ1YZHfr 00:31:59.358 20:11:47 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.rbWQ1YZHfr 00:31:59.619 20:11:47 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:59.619 20:11:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:59.619 20:11:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:59.619 20:11:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.619 20:11:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.619 20:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.619 20:11:47 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:59.619 20:11:47 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:59.619 20:11:47 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:59.619 20:11:47 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
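(Editor's note: the failed `keyring_file_add_key` above is the permission gate — SPDK's `keyring_file_check_path` rejects a key file at mode 0660 with "Operation not permitted" and accepts it only after `chmod 0600`. A local stand-in for that check, not the SPDK implementation itself:)

```shell
# Sketch of the permission gate exercised above: reject any key file
# whose mode is not exactly 0600, mirroring keyring_file_check_path.
keyfile=$(mktemp)

check_key_perms() {
    local mode
    mode=$(stat -c '%a' "$1")
    if [ "$mode" != "600" ]; then
        echo "Invalid permissions for key file '$1': 0$mode" >&2
        return 1
    fi
    echo "ok"
}

chmod 0660 "$keyfile"
check_key_perms "$keyfile" || echo "rejected"   # rejected, like the -1 error above
chmod 0600 "$keyfile"
check_key_perms "$keyfile"                      # ok
rm -f "$keyfile"
```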
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:59.619 20:11:47 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:59.619 20:11:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.619 20:11:47 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:59.619 20:11:47 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:59.619 20:11:47 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:59.619 20:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:59.879 [2024-07-24 20:11:47.623279] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rbWQ1YZHfr': No such file or directory 00:31:59.879 [2024-07-24 20:11:47.623292] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:59.879 [2024-07-24 20:11:47.623308] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:59.879 [2024-07-24 20:11:47.623313] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:59.879 [2024-07-24 20:11:47.623318] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:59.879 request: 00:31:59.879 { 00:31:59.879 "name": "nvme0", 00:31:59.879 "trtype": "tcp", 00:31:59.879 "traddr": "127.0.0.1", 00:31:59.879 "adrfam": "ipv4", 00:31:59.879 "trsvcid": "4420", 00:31:59.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:59.879 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:59.879 "prchk_reftag": false, 00:31:59.879 "prchk_guard": false, 00:31:59.879 "hdgst": false, 00:31:59.879 "ddgst": false, 00:31:59.879 "psk": "key0", 00:31:59.879 "method": "bdev_nvme_attach_controller", 00:31:59.879 "req_id": 1 00:31:59.879 } 00:31:59.879 Got JSON-RPC error response 00:31:59.879 response: 00:31:59.879 { 00:31:59.879 "code": -19, 00:31:59.879 "message": "No such device" 00:31:59.879 } 00:31:59.879 20:11:47 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:59.879 20:11:47 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:59.879 20:11:47 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:59.879 20:11:47 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:59.879 20:11:47 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:59.879 20:11:47 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YyspkVwZ9X 00:31:59.879 20:11:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:59.879 20:11:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:59.879 20:11:47 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:31:59.879 20:11:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:59.879 20:11:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:59.879 20:11:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:59.879 20:11:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:00.140 20:11:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YyspkVwZ9X 00:32:00.140 20:11:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YyspkVwZ9X 00:32:00.140 20:11:47 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.YyspkVwZ9X 00:32:00.140 20:11:47 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YyspkVwZ9X 00:32:00.140 20:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YyspkVwZ9X 00:32:00.140 20:11:47 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.140 20:11:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.401 nvme0n1 00:32:00.401 20:11:48 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:00.401 20:11:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.401 20:11:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.401 20:11:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.401 20:11:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.401 20:11:48 keyring_file -- 
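(Editor's note: `format_interchange_psk` above shells out to an inline `python -` to wrap the raw hex key in the NVMe/TCP TLS PSK interchange format. The sketch below reproduces that framing as I understand it from the spec — prefix, two-digit hash id, then base64 of the key bytes plus a little-endian CRC32 trailer; treat the exact layout as an assumption, not SPDK's verbatim code.)

```python
import base64
import binascii
import zlib

def format_interchange_psk(key_hex: str, digest: int) -> str:
    """Assumed layout: NVMeTLSkey-1:<2-digit hash id>:<base64(key || CRC32-LE)>:
    digest 0 (hash id 00) denotes a configured PSK with no hash."""
    key = binascii.unhexlify(key_hex)
    crc = zlib.crc32(key).to_bytes(4, "little")  # integrity trailer
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

# Same arguments as the traced call: prep_key key0 00112233445566778899aabbccddeeff 0
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```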
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.662 20:11:48 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:00.662 20:11:48 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:00.662 20:11:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:00.662 20:11:48 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:00.662 20:11:48 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:00.662 20:11:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.662 20:11:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.662 20:11:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.923 20:11:48 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:00.923 20:11:48 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:00.923 20:11:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.923 20:11:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.923 20:11:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.923 20:11:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.923 20:11:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.923 20:11:48 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:00.923 20:11:48 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:00.923 20:11:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:01.183 20:11:49 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:01.183 20:11:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.183 20:11:49 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:01.443 20:11:49 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:01.443 20:11:49 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YyspkVwZ9X 00:32:01.443 20:11:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YyspkVwZ9X 00:32:01.443 20:11:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rnRkyDDOAd 00:32:01.443 20:11:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rnRkyDDOAd 00:32:01.704 20:11:49 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.704 20:11:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:01.704 nvme0n1 00:32:01.965 20:11:49 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:01.965 20:11:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:01.965 20:11:49 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:01.965 "subsystems": [ 00:32:01.965 { 00:32:01.965 "subsystem": "keyring", 00:32:01.965 "config": [ 00:32:01.965 { 
00:32:01.965 "method": "keyring_file_add_key", 00:32:01.965 "params": { 00:32:01.965 "name": "key0", 00:32:01.965 "path": "/tmp/tmp.YyspkVwZ9X" 00:32:01.965 } 00:32:01.965 }, 00:32:01.965 { 00:32:01.965 "method": "keyring_file_add_key", 00:32:01.965 "params": { 00:32:01.965 "name": "key1", 00:32:01.965 "path": "/tmp/tmp.rnRkyDDOAd" 00:32:01.965 } 00:32:01.965 } 00:32:01.965 ] 00:32:01.965 }, 00:32:01.965 { 00:32:01.965 "subsystem": "iobuf", 00:32:01.965 "config": [ 00:32:01.965 { 00:32:01.965 "method": "iobuf_set_options", 00:32:01.965 "params": { 00:32:01.965 "small_pool_count": 8192, 00:32:01.965 "large_pool_count": 1024, 00:32:01.965 "small_bufsize": 8192, 00:32:01.965 "large_bufsize": 135168 00:32:01.966 } 00:32:01.966 } 00:32:01.966 ] 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "subsystem": "sock", 00:32:01.966 "config": [ 00:32:01.966 { 00:32:01.966 "method": "sock_set_default_impl", 00:32:01.966 "params": { 00:32:01.966 "impl_name": "posix" 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "sock_impl_set_options", 00:32:01.966 "params": { 00:32:01.966 "impl_name": "ssl", 00:32:01.966 "recv_buf_size": 4096, 00:32:01.966 "send_buf_size": 4096, 00:32:01.966 "enable_recv_pipe": true, 00:32:01.966 "enable_quickack": false, 00:32:01.966 "enable_placement_id": 0, 00:32:01.966 "enable_zerocopy_send_server": true, 00:32:01.966 "enable_zerocopy_send_client": false, 00:32:01.966 "zerocopy_threshold": 0, 00:32:01.966 "tls_version": 0, 00:32:01.966 "enable_ktls": false 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "sock_impl_set_options", 00:32:01.966 "params": { 00:32:01.966 "impl_name": "posix", 00:32:01.966 "recv_buf_size": 2097152, 00:32:01.966 "send_buf_size": 2097152, 00:32:01.966 "enable_recv_pipe": true, 00:32:01.966 "enable_quickack": false, 00:32:01.966 "enable_placement_id": 0, 00:32:01.966 "enable_zerocopy_send_server": true, 00:32:01.966 "enable_zerocopy_send_client": false, 00:32:01.966 "zerocopy_threshold": 0, 
00:32:01.966 "tls_version": 0, 00:32:01.966 "enable_ktls": false 00:32:01.966 } 00:32:01.966 } 00:32:01.966 ] 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "subsystem": "vmd", 00:32:01.966 "config": [] 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "subsystem": "accel", 00:32:01.966 "config": [ 00:32:01.966 { 00:32:01.966 "method": "accel_set_options", 00:32:01.966 "params": { 00:32:01.966 "small_cache_size": 128, 00:32:01.966 "large_cache_size": 16, 00:32:01.966 "task_count": 2048, 00:32:01.966 "sequence_count": 2048, 00:32:01.966 "buf_count": 2048 00:32:01.966 } 00:32:01.966 } 00:32:01.966 ] 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "subsystem": "bdev", 00:32:01.966 "config": [ 00:32:01.966 { 00:32:01.966 "method": "bdev_set_options", 00:32:01.966 "params": { 00:32:01.966 "bdev_io_pool_size": 65535, 00:32:01.966 "bdev_io_cache_size": 256, 00:32:01.966 "bdev_auto_examine": true, 00:32:01.966 "iobuf_small_cache_size": 128, 00:32:01.966 "iobuf_large_cache_size": 16 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "bdev_raid_set_options", 00:32:01.966 "params": { 00:32:01.966 "process_window_size_kb": 1024, 00:32:01.966 "process_max_bandwidth_mb_sec": 0 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "bdev_iscsi_set_options", 00:32:01.966 "params": { 00:32:01.966 "timeout_sec": 30 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "bdev_nvme_set_options", 00:32:01.966 "params": { 00:32:01.966 "action_on_timeout": "none", 00:32:01.966 "timeout_us": 0, 00:32:01.966 "timeout_admin_us": 0, 00:32:01.966 "keep_alive_timeout_ms": 10000, 00:32:01.966 "arbitration_burst": 0, 00:32:01.966 "low_priority_weight": 0, 00:32:01.966 "medium_priority_weight": 0, 00:32:01.966 "high_priority_weight": 0, 00:32:01.966 "nvme_adminq_poll_period_us": 10000, 00:32:01.966 "nvme_ioq_poll_period_us": 0, 00:32:01.966 "io_queue_requests": 512, 00:32:01.966 "delay_cmd_submit": true, 00:32:01.966 "transport_retry_count": 4, 00:32:01.966 
"bdev_retry_count": 3, 00:32:01.966 "transport_ack_timeout": 0, 00:32:01.966 "ctrlr_loss_timeout_sec": 0, 00:32:01.966 "reconnect_delay_sec": 0, 00:32:01.966 "fast_io_fail_timeout_sec": 0, 00:32:01.966 "disable_auto_failback": false, 00:32:01.966 "generate_uuids": false, 00:32:01.966 "transport_tos": 0, 00:32:01.966 "nvme_error_stat": false, 00:32:01.966 "rdma_srq_size": 0, 00:32:01.966 "io_path_stat": false, 00:32:01.966 "allow_accel_sequence": false, 00:32:01.966 "rdma_max_cq_size": 0, 00:32:01.966 "rdma_cm_event_timeout_ms": 0, 00:32:01.966 "dhchap_digests": [ 00:32:01.966 "sha256", 00:32:01.966 "sha384", 00:32:01.966 "sha512" 00:32:01.966 ], 00:32:01.966 "dhchap_dhgroups": [ 00:32:01.966 "null", 00:32:01.966 "ffdhe2048", 00:32:01.966 "ffdhe3072", 00:32:01.966 "ffdhe4096", 00:32:01.966 "ffdhe6144", 00:32:01.966 "ffdhe8192" 00:32:01.966 ] 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "bdev_nvme_attach_controller", 00:32:01.966 "params": { 00:32:01.966 "name": "nvme0", 00:32:01.966 "trtype": "TCP", 00:32:01.966 "adrfam": "IPv4", 00:32:01.966 "traddr": "127.0.0.1", 00:32:01.966 "trsvcid": "4420", 00:32:01.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:01.966 "prchk_reftag": false, 00:32:01.966 "prchk_guard": false, 00:32:01.966 "ctrlr_loss_timeout_sec": 0, 00:32:01.966 "reconnect_delay_sec": 0, 00:32:01.966 "fast_io_fail_timeout_sec": 0, 00:32:01.966 "psk": "key0", 00:32:01.966 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:01.966 "hdgst": false, 00:32:01.966 "ddgst": false 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "bdev_nvme_set_hotplug", 00:32:01.966 "params": { 00:32:01.966 "period_us": 100000, 00:32:01.966 "enable": false 00:32:01.966 } 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "method": "bdev_wait_for_examine" 00:32:01.966 } 00:32:01.966 ] 00:32:01.966 }, 00:32:01.966 { 00:32:01.966 "subsystem": "nbd", 00:32:01.966 "config": [] 00:32:01.966 } 00:32:01.966 ] 00:32:01.966 }' 00:32:01.966 20:11:49 keyring_file 
-- keyring/file.sh@114 -- # killprocess 3908239 00:32:01.966 20:11:49 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3908239 ']' 00:32:01.966 20:11:49 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3908239 00:32:01.966 20:11:49 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:01.966 20:11:49 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:01.966 20:11:49 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3908239 00:32:02.228 20:11:49 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:02.228 20:11:49 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:02.228 20:11:49 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3908239' 00:32:02.228 killing process with pid 3908239 00:32:02.228 20:11:49 keyring_file -- common/autotest_common.sh@969 -- # kill 3908239 00:32:02.228 Received shutdown signal, test time was about 1.000000 seconds 00:32:02.228 00:32:02.228 Latency(us) 00:32:02.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.228 =================================================================================================================== 00:32:02.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:02.228 20:11:49 keyring_file -- common/autotest_common.sh@974 -- # wait 3908239 00:32:02.228 20:11:50 keyring_file -- keyring/file.sh@117 -- # bperfpid=3909857 00:32:02.228 20:11:50 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3909857 /var/tmp/bperf.sock 00:32:02.228 20:11:50 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3909857 ']' 00:32:02.228 20:11:50 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:02.228 20:11:50 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:02.228 20:11:50 keyring_file -- keyring/file.sh@115 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:02.228 20:11:50 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:02.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:02.228 20:11:50 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:02.228 20:11:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:02.228 20:11:50 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:02.228 "subsystems": [ 00:32:02.228 { 00:32:02.228 "subsystem": "keyring", 00:32:02.228 "config": [ 00:32:02.228 { 00:32:02.228 "method": "keyring_file_add_key", 00:32:02.228 "params": { 00:32:02.228 "name": "key0", 00:32:02.228 "path": "/tmp/tmp.YyspkVwZ9X" 00:32:02.228 } 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "method": "keyring_file_add_key", 00:32:02.228 "params": { 00:32:02.228 "name": "key1", 00:32:02.228 "path": "/tmp/tmp.rnRkyDDOAd" 00:32:02.228 } 00:32:02.228 } 00:32:02.228 ] 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "subsystem": "iobuf", 00:32:02.228 "config": [ 00:32:02.228 { 00:32:02.228 "method": "iobuf_set_options", 00:32:02.228 "params": { 00:32:02.228 "small_pool_count": 8192, 00:32:02.228 "large_pool_count": 1024, 00:32:02.228 "small_bufsize": 8192, 00:32:02.228 "large_bufsize": 135168 00:32:02.228 } 00:32:02.228 } 00:32:02.228 ] 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "subsystem": "sock", 00:32:02.228 "config": [ 00:32:02.228 { 00:32:02.228 "method": "sock_set_default_impl", 00:32:02.228 "params": { 00:32:02.228 "impl_name": "posix" 00:32:02.228 } 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "method": "sock_impl_set_options", 00:32:02.228 "params": { 00:32:02.228 "impl_name": "ssl", 00:32:02.228 "recv_buf_size": 4096, 00:32:02.228 "send_buf_size": 4096, 00:32:02.228 "enable_recv_pipe": 
true, 00:32:02.228 "enable_quickack": false, 00:32:02.228 "enable_placement_id": 0, 00:32:02.228 "enable_zerocopy_send_server": true, 00:32:02.228 "enable_zerocopy_send_client": false, 00:32:02.228 "zerocopy_threshold": 0, 00:32:02.228 "tls_version": 0, 00:32:02.228 "enable_ktls": false 00:32:02.228 } 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "method": "sock_impl_set_options", 00:32:02.228 "params": { 00:32:02.228 "impl_name": "posix", 00:32:02.228 "recv_buf_size": 2097152, 00:32:02.228 "send_buf_size": 2097152, 00:32:02.228 "enable_recv_pipe": true, 00:32:02.228 "enable_quickack": false, 00:32:02.228 "enable_placement_id": 0, 00:32:02.228 "enable_zerocopy_send_server": true, 00:32:02.228 "enable_zerocopy_send_client": false, 00:32:02.228 "zerocopy_threshold": 0, 00:32:02.228 "tls_version": 0, 00:32:02.228 "enable_ktls": false 00:32:02.228 } 00:32:02.228 } 00:32:02.228 ] 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "subsystem": "vmd", 00:32:02.228 "config": [] 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "subsystem": "accel", 00:32:02.228 "config": [ 00:32:02.228 { 00:32:02.228 "method": "accel_set_options", 00:32:02.228 "params": { 00:32:02.228 "small_cache_size": 128, 00:32:02.228 "large_cache_size": 16, 00:32:02.228 "task_count": 2048, 00:32:02.228 "sequence_count": 2048, 00:32:02.228 "buf_count": 2048 00:32:02.228 } 00:32:02.228 } 00:32:02.228 ] 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "subsystem": "bdev", 00:32:02.228 "config": [ 00:32:02.228 { 00:32:02.228 "method": "bdev_set_options", 00:32:02.228 "params": { 00:32:02.228 "bdev_io_pool_size": 65535, 00:32:02.228 "bdev_io_cache_size": 256, 00:32:02.228 "bdev_auto_examine": true, 00:32:02.228 "iobuf_small_cache_size": 128, 00:32:02.228 "iobuf_large_cache_size": 16 00:32:02.228 } 00:32:02.228 }, 00:32:02.228 { 00:32:02.228 "method": "bdev_raid_set_options", 00:32:02.228 "params": { 00:32:02.228 "process_window_size_kb": 1024, 00:32:02.228 "process_max_bandwidth_mb_sec": 0 00:32:02.228 } 00:32:02.228 }, 
00:32:02.229 { 00:32:02.229 "method": "bdev_iscsi_set_options", 00:32:02.229 "params": { 00:32:02.229 "timeout_sec": 30 00:32:02.229 } 00:32:02.229 }, 00:32:02.229 { 00:32:02.229 "method": "bdev_nvme_set_options", 00:32:02.229 "params": { 00:32:02.229 "action_on_timeout": "none", 00:32:02.229 "timeout_us": 0, 00:32:02.229 "timeout_admin_us": 0, 00:32:02.229 "keep_alive_timeout_ms": 10000, 00:32:02.229 "arbitration_burst": 0, 00:32:02.229 "low_priority_weight": 0, 00:32:02.229 "medium_priority_weight": 0, 00:32:02.229 "high_priority_weight": 0, 00:32:02.229 "nvme_adminq_poll_period_us": 10000, 00:32:02.229 "nvme_ioq_poll_period_us": 0, 00:32:02.229 "io_queue_requests": 512, 00:32:02.229 "delay_cmd_submit": true, 00:32:02.229 "transport_retry_count": 4, 00:32:02.229 "bdev_retry_count": 3, 00:32:02.229 "transport_ack_timeout": 0, 00:32:02.229 "ctrlr_loss_timeout_sec": 0, 00:32:02.229 "reconnect_delay_sec": 0, 00:32:02.229 "fast_io_fail_timeout_sec": 0, 00:32:02.229 "disable_auto_failback": false, 00:32:02.229 "generate_uuids": false, 00:32:02.229 "transport_tos": 0, 00:32:02.229 "nvme_error_stat": false, 00:32:02.229 "rdma_srq_size": 0, 00:32:02.229 "io_path_stat": false, 00:32:02.229 "allow_accel_sequence": false, 00:32:02.229 "rdma_max_cq_size": 0, 00:32:02.229 "rdma_cm_event_timeout_ms": 0, 00:32:02.229 "dhchap_digests": [ 00:32:02.229 "sha256", 00:32:02.229 "sha384", 00:32:02.229 "sha512" 00:32:02.229 ], 00:32:02.229 "dhchap_dhgroups": [ 00:32:02.229 "null", 00:32:02.229 "ffdhe2048", 00:32:02.229 "ffdhe3072", 00:32:02.229 "ffdhe4096", 00:32:02.229 "ffdhe6144", 00:32:02.229 "ffdhe8192" 00:32:02.229 ] 00:32:02.229 } 00:32:02.229 }, 00:32:02.229 { 00:32:02.229 "method": "bdev_nvme_attach_controller", 00:32:02.229 "params": { 00:32:02.229 "name": "nvme0", 00:32:02.229 "trtype": "TCP", 00:32:02.229 "adrfam": "IPv4", 00:32:02.229 "traddr": "127.0.0.1", 00:32:02.229 "trsvcid": "4420", 00:32:02.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.229 "prchk_reftag": 
false, 00:32:02.229 "prchk_guard": false, 00:32:02.229 "ctrlr_loss_timeout_sec": 0, 00:32:02.229 "reconnect_delay_sec": 0, 00:32:02.229 "fast_io_fail_timeout_sec": 0, 00:32:02.229 "psk": "key0", 00:32:02.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.229 "hdgst": false, 00:32:02.229 "ddgst": false 00:32:02.229 } 00:32:02.229 }, 00:32:02.229 { 00:32:02.229 "method": "bdev_nvme_set_hotplug", 00:32:02.229 "params": { 00:32:02.229 "period_us": 100000, 00:32:02.229 "enable": false 00:32:02.229 } 00:32:02.229 }, 00:32:02.229 { 00:32:02.229 "method": "bdev_wait_for_examine" 00:32:02.229 } 00:32:02.229 ] 00:32:02.229 }, 00:32:02.229 { 00:32:02.229 "subsystem": "nbd", 00:32:02.229 "config": [] 00:32:02.229 } 00:32:02.229 ] 00:32:02.229 }' 00:32:02.229 [2024-07-24 20:11:50.119844] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 00:32:02.229 [2024-07-24 20:11:50.119903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909857 ] 00:32:02.229 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.490 [2024-07-24 20:11:50.193341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.490 [2024-07-24 20:11:50.246911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.490 [2024-07-24 20:11:50.388280] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:03.059 20:11:50 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:03.059 20:11:50 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:03.059 20:11:50 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:03.059 20:11:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:32:03.059 20:11:50 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:03.318 20:11:51 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:03.318 20:11:51 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.318 20:11:51 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:03.318 20:11:51 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.318 20:11:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.577 20:11:51 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:03.577 20:11:51 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:03.577 20:11:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:03.577 20:11:51 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:03.836 20:11:51 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:03.836 20:11:51 keyring_file -- keyring/file.sh@1 
-- # cleanup 00:32:03.836 20:11:51 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YyspkVwZ9X /tmp/tmp.rnRkyDDOAd 00:32:03.836 20:11:51 keyring_file -- keyring/file.sh@20 -- # killprocess 3909857 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3909857 ']' 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3909857 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3909857 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3909857' 00:32:03.836 killing process with pid 3909857 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@969 -- # kill 3909857 00:32:03.836 Received shutdown signal, test time was about 1.000000 seconds 00:32:03.836 00:32:03.836 Latency(us) 00:32:03.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.836 =================================================================================================================== 00:32:03.836 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@974 -- # wait 3909857 00:32:03.836 20:11:51 keyring_file -- keyring/file.sh@21 -- # killprocess 3908096 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3908096 ']' 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3908096 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:03.836 20:11:51 keyring_file -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3908096 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3908096' 00:32:03.836 killing process with pid 3908096 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@969 -- # kill 3908096 00:32:03.836 [2024-07-24 20:11:51.760438] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:03.836 20:11:51 keyring_file -- common/autotest_common.sh@974 -- # wait 3908096 00:32:04.095 00:32:04.095 real 0m10.921s 00:32:04.095 user 0m25.315s 00:32:04.095 sys 0m2.515s 00:32:04.095 20:11:51 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.095 20:11:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:04.095 ************************************ 00:32:04.095 END TEST keyring_file 00:32:04.095 ************************************ 00:32:04.095 20:11:52 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:04.095 20:11:52 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:04.095 20:11:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:04.095 20:11:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.095 20:11:52 -- common/autotest_common.sh@10 -- # set +x 00:32:04.361 ************************************ 00:32:04.361 START TEST keyring_linux 00:32:04.361 ************************************ 00:32:04.361 20:11:52 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:04.361 * Looking for 
test storage... 00:32:04.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:04.361 20:11:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:04.361 20:11:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.361 20:11:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.362 20:11:52 keyring_linux 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.362 20:11:52 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.362 20:11:52 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.362 20:11:52 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.362 20:11:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.362 20:11:52 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.362 20:11:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.362 20:11:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:04.362 20:11:52 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:04.362 20:11:52 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:04.362 /tmp/:spdk-test:key0 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:04.362 20:11:52 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:04.362 20:11:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:04.362 /tmp/:spdk-test:key1 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3910468 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3910468 00:32:04.362 20:11:52 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:04.362 20:11:52 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3910468 ']' 00:32:04.362 20:11:52 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.362 20:11:52 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:04.362 20:11:52 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.362 20:11:52 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:04.362 20:11:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:04.683 [2024-07-24 20:11:52.328199] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:32:04.683 [2024-07-24 20:11:52.328290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910468 ] 00:32:04.683 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.683 [2024-07-24 20:11:52.391112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.683 [2024-07-24 20:11:52.465231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:05.254 20:11:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:05.254 [2024-07-24 20:11:53.089699] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.254 null0 00:32:05.254 [2024-07-24 20:11:53.121756] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:05.254 [2024-07-24 20:11:53.122136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.254 20:11:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:05.254 498453097 00:32:05.254 20:11:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:05.254 716563978 00:32:05.254 20:11:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3910489 00:32:05.254 20:11:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3910489 
/var/tmp/bperf.sock 00:32:05.254 20:11:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3910489 ']' 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:05.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.254 20:11:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:05.254 [2024-07-24 20:11:53.204626] Starting SPDK v24.09-pre git sha1 19f5787c8 / DPDK 24.03.0 initialization... 
00:32:05.254 [2024-07-24 20:11:53.204672] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910489 ] 00:32:05.515 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.515 [2024-07-24 20:11:53.279056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.515 [2024-07-24 20:11:53.332968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.086 20:11:53 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.086 20:11:53 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:06.086 20:11:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:06.086 20:11:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:06.346 20:11:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:06.346 20:11:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:06.607 20:11:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:06.607 20:11:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:06.607 [2024-07-24 20:11:54.443247] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:06.607 
nvme0n1 00:32:06.607 20:11:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:06.607 20:11:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:06.607 20:11:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:06.607 20:11:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:06.607 20:11:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.607 20:11:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:06.867 20:11:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:06.867 20:11:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:06.867 20:11:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:06.867 20:11:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:06.867 20:11:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:06.867 20:11:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:06.867 20:11:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.128 20:11:54 keyring_linux -- keyring/linux.sh@25 -- # sn=498453097 00:32:07.128 20:11:54 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:07.128 20:11:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:07.128 20:11:54 keyring_linux -- keyring/linux.sh@26 -- # [[ 498453097 == \4\9\8\4\5\3\0\9\7 ]] 00:32:07.128 20:11:54 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 498453097 00:32:07.128 20:11:54 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:07.128 20:11:54 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:07.128 Running I/O for 1 seconds... 00:32:08.071 00:32:08.071 Latency(us) 00:32:08.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.071 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:08.071 nvme0n1 : 1.02 7933.83 30.99 0.00 0.00 15991.62 3549.87 20206.93 00:32:08.071 =================================================================================================================== 00:32:08.071 Total : 7933.83 30.99 0.00 0.00 15991.62 3549.87 20206.93 00:32:08.071 0 00:32:08.071 20:11:55 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:08.071 20:11:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:08.331 20:11:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:08.331 20:11:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:08.331 20:11:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:08.331 20:11:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:08.331 20:11:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:08.331 20:11:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:08.592 20:11:56 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:08.592 20:11:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:08.592 [2024-07-24 20:11:56.478496] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:08.592 [2024-07-24 20:11:56.479291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb800f0 (107): Transport endpoint is not connected 00:32:08.592 [2024-07-24 20:11:56.480288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xb800f0 (9): Bad file descriptor 00:32:08.592 [2024-07-24 20:11:56.481289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:08.592 [2024-07-24 20:11:56.481297] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:08.592 [2024-07-24 20:11:56.481302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:08.592 request: 00:32:08.592 { 00:32:08.592 "name": "nvme0", 00:32:08.592 "trtype": "tcp", 00:32:08.592 "traddr": "127.0.0.1", 00:32:08.592 "adrfam": "ipv4", 00:32:08.592 "trsvcid": "4420", 00:32:08.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:08.592 "prchk_reftag": false, 00:32:08.592 "prchk_guard": false, 00:32:08.592 "hdgst": false, 00:32:08.592 "ddgst": false, 00:32:08.592 "psk": ":spdk-test:key1", 00:32:08.592 "method": "bdev_nvme_attach_controller", 00:32:08.592 "req_id": 1 00:32:08.592 } 00:32:08.592 Got JSON-RPC error response 00:32:08.592 response: 00:32:08.592 { 00:32:08.592 "code": -5, 00:32:08.592 "message": "Input/output error" 00:32:08.592 } 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@33 -- # sn=498453097
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 498453097
00:32:08.592 1 links removed
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@33 -- # sn=716563978
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 716563978
00:32:08.592 1 links removed
00:32:08.592 20:11:56 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3910489
00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3910489 ']'
00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3910489
00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:08.592 20:11:56 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3910489
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3910489'
killing process with pid 3910489
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@969 -- # kill 3910489
00:32:08.853 Received shutdown signal, test time was about 1.000000 seconds
00:32:08.853
00:32:08.853 Latency(us)
00:32:08.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:08.853 ===================================================================================================================
00:32:08.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@974 -- # wait 3910489
00:32:08.853 20:11:56 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3910468
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3910468 ']'
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3910468
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3910468
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3910468'
killing process with pid 3910468
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@969 -- # kill 3910468
00:32:08.853 20:11:56 keyring_linux -- common/autotest_common.sh@974 -- # wait 3910468
00:32:09.114
00:32:09.114 real 0m4.893s
00:32:09.114 user 0m8.483s
00:32:09.114 sys 0m1.240s
00:32:09.114 20:11:56 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:09.114 20:11:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:32:09.114 ************************************
00:32:09.114 END TEST keyring_linux
00:32:09.114 ************************************
00:32:09.114 20:11:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:32:09.114 20:11:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:32:09.114 20:11:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:32:09.114 20:11:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:32:09.114 20:11:56 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:32:09.114 20:11:56 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:32:09.114 20:11:56 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:32:09.114 20:11:56 -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:09.114 20:11:56 -- common/autotest_common.sh@10 -- # set +x
00:32:09.114 20:11:56 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:32:09.114 20:11:56 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:32:09.114 20:11:56 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:32:09.114 20:11:56 -- common/autotest_common.sh@10 -- # set +x
00:32:17.259 INFO: APP EXITING
00:32:17.259 INFO: killing all VMs
00:32:17.259 INFO: killing vhost app
00:32:17.259 WARN: no vhost pid file found
00:32:17.259 INFO: EXIT DONE
00:32:19.805 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:32:19.805 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:32:19.805 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:32:19.805 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:32:19.805 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:65:00.0 (144d a80a): Already using the nvme driver
00:32:20.066 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:32:20.066 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:32:24.270 Cleaning
00:32:24.270 Removing: /var/run/dpdk/spdk0/config
00:32:24.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:24.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:24.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:24.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:24.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:32:24.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:32:24.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:32:24.271 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:32:24.271 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:24.271 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:24.271 Removing: /var/run/dpdk/spdk1/config
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:32:24.271 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:24.271 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:24.271 Removing: /var/run/dpdk/spdk1/mp_socket
00:32:24.271 Removing: /var/run/dpdk/spdk2/config
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:32:24.271 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:24.271 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:24.271 Removing: /var/run/dpdk/spdk3/config
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:24.271 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:24.271 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:24.271 Removing: /var/run/dpdk/spdk4/config
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:24.271 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:24.271 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:24.271 Removing: /dev/shm/bdev_svc_trace.1
00:32:24.271 Removing: /dev/shm/nvmf_trace.0
00:32:24.271 Removing: /dev/shm/spdk_tgt_trace.pid3457705
00:32:24.271 Removing: /var/run/dpdk/spdk0
00:32:24.271 Removing: /var/run/dpdk/spdk1
00:32:24.271 Removing: /var/run/dpdk/spdk2
00:32:24.271 Removing: /var/run/dpdk/spdk3
00:32:24.271 Removing: /var/run/dpdk/spdk4
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3456055
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3457705
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3458233
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3459897
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3460068
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3461354
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3461470
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3461826
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3462741
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3463494
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3463842
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3464104
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3464382
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3464752
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3465104
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3465453
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3465688
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3466902
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3470229
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3470550
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3470903
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3471235
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3471609
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3471861
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3472318
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3472347
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3472693
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3473017
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3473069
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3473401
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3473842
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3474193
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3474467
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3478923
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3484117
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3496044
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3496830
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3501894
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3502252
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3507423
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3514920
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3518024
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3530504
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3541195
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3543390
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3544543
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3565742
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3570422
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3623514
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3629884
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3637055
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3644242
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3644244
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3645253
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3646256
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3647260
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3647929
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3647935
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3648272
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3648288
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3648411
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3649462
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3650497
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3651602
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3652212
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3652306
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3652598
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3653807
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3655150
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3665133
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3697432
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3702587
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3704523
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3706872
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3706910
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3707316
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3707507
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3708134
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3710851
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3711926
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3712553
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3715018
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3715862
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3716760
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3721677
00:32:24.271 Removing: /var/run/dpdk/spdk_pid3733671
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3738578
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3745792
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3747304
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3749137
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3754247
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3759248
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3768585
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3768589
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3773621
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3773935
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3774089
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3774632
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3774637
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3780019
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3780837
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3786003
00:32:24.531 Removing: /var/run/dpdk/spdk_pid3789359
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3795735
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3802265
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3812224
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3821063
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3821066
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3843562
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3844379
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3845165
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3845917
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3846969
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3847654
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3848342
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3849028
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3854070
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3854405
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3861453
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3861809
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3864477
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3872322
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3872415
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3878356
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3880705
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3882910
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3884404
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3886830
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3888132
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3898089
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3898750
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3899414
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3902193
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3902701
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3903373
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3908096
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3908239
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3909857
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3910468
00:32:24.532 Removing: /var/run/dpdk/spdk_pid3910489
00:32:24.532 Clean
00:32:24.793 20:12:12 -- common/autotest_common.sh@1451 -- # return 0
00:32:24.793 20:12:12 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:32:24.793 20:12:12 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:24.793 20:12:12 -- common/autotest_common.sh@10 -- # set +x
00:32:24.793 20:12:12 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:32:24.793 20:12:12 -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:24.793 20:12:12 -- common/autotest_common.sh@10 -- # set +x
00:32:24.793 20:12:12 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:24.793 20:12:12 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:24.793 20:12:12 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:24.793 20:12:12 -- spdk/autotest.sh@395 -- # hash lcov
00:32:24.793 20:12:12 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:24.793 20:12:12 -- spdk/autotest.sh@397 -- # hostname
00:32:24.793 20:12:12 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:25.054 geninfo: WARNING: invalid characters removed from testname!
00:32:51.676 20:12:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:51.676 20:12:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:53.059 20:12:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:54.969 20:12:42 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:56.881 20:12:44 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:58.793 20:12:46 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:00.176 20:12:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:00.176 20:12:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:00.176 20:12:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:00.176 20:12:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:00.176 20:12:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:00.176 20:12:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:00.176 20:12:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:00.176 20:12:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:00.176 20:12:47 -- paths/export.sh@5 -- $ export PATH
00:33:00.176 20:12:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:00.176 20:12:47 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:00.176 20:12:47 -- common/autobuild_common.sh@447 -- $ date +%s
00:33:00.176 20:12:47 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721844767.XXXXXX
00:33:00.176 20:12:47 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721844767.ZXkwzO
00:33:00.176 20:12:47 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:33:00.176 20:12:47 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:33:00.176 20:12:47 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:00.176 20:12:47 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:00.176 20:12:47 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:00.176 20:12:47 -- common/autobuild_common.sh@463 -- $ get_config_params
00:33:00.176 20:12:47 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:33:00.176 20:12:47 -- common/autotest_common.sh@10 -- $ set +x
00:33:00.177 20:12:47 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:00.177 20:12:47 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:33:00.177 20:12:47 -- pm/common@17 -- $ local monitor
00:33:00.177 20:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:00.177 20:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:00.177 20:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:00.177 20:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:00.177 20:12:47 -- pm/common@21 -- $ date +%s
00:33:00.177 20:12:47 -- pm/common@21 -- $ date +%s
00:33:00.177 20:12:47 -- pm/common@25 -- $ sleep 1
00:33:00.177 20:12:47 -- pm/common@21 -- $ date +%s
00:33:00.177 20:12:47 -- pm/common@21 -- $ date +%s
00:33:00.177 20:12:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844767
00:33:00.177 20:12:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844767
00:33:00.177 20:12:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844767
00:33:00.177 20:12:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844767
00:33:00.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844767_collect-vmstat.pm.log
00:33:00.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844767_collect-cpu-load.pm.log
00:33:00.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844767_collect-cpu-temp.pm.log
00:33:00.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844767_collect-bmc-pm.bmc.pm.log
00:33:01.120 20:12:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:33:01.120 20:12:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:33:01.120 20:12:48 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:01.120 20:12:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:01.120 20:12:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:01.120 20:12:48 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:01.120 20:12:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:01.120 20:12:48 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:01.120 20:12:48 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:01.120 20:12:49 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:01.120 20:12:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:01.120 20:12:49 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:01.120 20:12:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:01.120 20:12:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:01.120 20:12:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:01.120 20:12:49 -- pm/common@44 -- $ pid=3923435
00:33:01.120 20:12:49 -- pm/common@50 -- $ kill -TERM 3923435
00:33:01.120 20:12:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:01.120 20:12:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:01.120 20:12:49 -- pm/common@44 -- $ pid=3923436
00:33:01.120 20:12:49 -- pm/common@50 -- $ kill -TERM 3923436
00:33:01.120 20:12:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:01.120 20:12:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:01.120 20:12:49 -- pm/common@44 -- $ pid=3923438
00:33:01.120 20:12:49 -- pm/common@50 -- $ kill -TERM 3923438
00:33:01.120 20:12:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:01.120 20:12:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:01.120 20:12:49 -- pm/common@44 -- $ pid=3923461
00:33:01.120 20:12:49 -- pm/common@50 -- $ sudo -E kill -TERM 3923461
00:33:01.120 + [[ -n 3336207 ]]
00:33:01.120 + sudo kill 3336207
00:33:01.131 [Pipeline] }
00:33:01.148 [Pipeline] // stage
00:33:01.154 [Pipeline] }
00:33:01.173 [Pipeline] // timeout
00:33:01.178 [Pipeline] }
00:33:01.196 [Pipeline] // catchError
00:33:01.201 [Pipeline] }
00:33:01.217 [Pipeline] // wrap
00:33:01.224 [Pipeline] }
00:33:01.240 [Pipeline] // catchError
00:33:01.248 [Pipeline] stage
00:33:01.250 [Pipeline] { (Epilogue)
00:33:01.264 [Pipeline] catchError
00:33:01.265 [Pipeline] {
00:33:01.280 [Pipeline] echo
00:33:01.282 Cleanup processes
00:33:01.287 [Pipeline] sh
00:33:01.578 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:01.578 3923540 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:01.578 3923982 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:01.593 [Pipeline] sh
00:33:01.879 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:01.879 ++ grep -v 'sudo pgrep'
00:33:01.879 ++ awk '{print $1}'
00:33:01.879 + sudo kill -9 3923540
00:33:01.892 [Pipeline] sh
00:33:02.176 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:14.421 [Pipeline] sh
00:33:14.707 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:14.707 Artifacts sizes are good
00:33:14.733 [Pipeline] archiveArtifacts
00:33:14.755 Archiving artifacts
00:33:14.964 [Pipeline] sh
00:33:15.251 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:15.266 [Pipeline] cleanWs
00:33:15.275 [WS-CLEANUP] Deleting project workspace...
00:33:15.275 [WS-CLEANUP] Deferred wipeout is used...
00:33:15.282 [WS-CLEANUP] done
00:33:15.284 [Pipeline] }
00:33:15.302 [Pipeline] // catchError
00:33:15.313 [Pipeline] sh
00:33:15.606 + logger -p user.info -t JENKINS-CI
00:33:15.636 [Pipeline] }
00:33:15.654 [Pipeline] // stage
00:33:15.657 [Pipeline] }
00:33:15.666 [Pipeline] // node
00:33:15.669 [Pipeline] End of Pipeline
00:33:15.692 Finished: SUCCESS